Running bash script in a kubernetes pod

I am trying to run an external bash script using the YAML file below.
The script is mounted at /scripts/run.sh, and I have set defaultMode: 0777 on the volume.
This is the error I get:
sh: 0: Can't open /scripts/run.sh
apiVersion: v1
data:
  script.sh: |-
    echo "Hello world!"
    kubectl get pods
kind: ConfigMap
metadata:
  name: script-configmap
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app: script-job
  name: script-job
spec:
  backoffLimit: 2
  template:
    spec:
      containers:
        - command:
            - sh
            - /scripts/run.sh
          image: 'bitnami/kubectl:1.12'
          name: script
          volumeMounts:
            - name: script-configmap
              mountPath: /scripts
              subPath: run.sh
              readOnly: false
      restartPolicy: Never
      volumes:
        - name: script-configmap
          configMap:
            name: script-configmap
            defaultMode: 0777

The file name is script.sh, not run.sh. Try:

containers:
  - command:
      - sh
      - /scripts/script.sh

Note that the volume mount's subPath: run.sh points at the same non-existent key, so update it to script.sh as well (or drop the subPath and mount the whole /scripts directory).
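To verify the fix (a sketch; job.yaml is a hypothetical file name for the manifests above):

$ kubectl apply -f job.yaml
$ kubectl logs job/script-job

The log should show Hello world! followed by the kubectl get pods output, provided the Job's service account is allowed to list pods.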

Related

capsh command inside kubernetes container

The Pod is in Running state, but logging into the container and running capsh --print gives the error:
sh: capsh: not found
Running the same image as a Docker container with --cap-add SYS_ADMIN or --privileged gives the desired output.
What changes in the Deployment or extra permissions are needed for it to work inside a k8s container?
Deployment :
kind: Deployment
apiVersion: apps/v1
metadata:
  name: sample-deployment
  namespace: sample
  labels:
    app: sample
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample
  template:
    metadata:
      labels:
        app: sample
    spec:
      containers:
        - name: sample
          image: alpine:3.17
          command:
            - sh
            - -c
            - while true; do echo Hello World; sleep 10; done
          env:
            - name: NFS_EXPORT_0
              value: /var/opt/backup
            - name: NFS_LOG_LEVEL
              value: DEBUG
          volumeMounts:
            - name: backup
              mountPath: /var/opt/backup
          securityContext:
            capabilities:
              add: ["SYS_ADMIN"]
      volumes:
        - name: backup
          persistentVolumeClaim:
            claimName: sample-pvc
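For context, sh: capsh: not found means the binary is missing from the image rather than a capability being denied: alpine does not ship the libcap tools by default. A minimal sketch for checking this inside the pod (the apk package name is an assumption and varies across Alpine releases):

$ kubectl exec -n sample -it deploy/sample-deployment -- sh
/ # apk add --no-cache libcap    # may be packaged as libcap-utils on newer releases
/ # capsh --print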

Kubernetes Deployment: Is there a way to copy a file from a mount directory to the root directory / after the deployment manifest is applied

The problem is that the mount path cannot be /, but I need to move the demo.txt file into / once the container is created.
I have this sample deployment.yaml:
kind: ConfigMap
apiVersion: v1
metadata:
  name: demo-configfile
data:
  myfile: |
    This is my demo file's text info
    This is just dummy text
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  selector:
    matchLabels:
      name: demo-configmaps-test
  template:
    metadata:
      labels:
        name: demo-configmaps-test
    spec:
      containers:
        - name: demo-container
          image: alpine
          imagePullPolicy: Always
          command: ['sh', '-c', 'sleep 36000']
          volumeMounts:
            - name: demo-files
              mountPath: /demo/files
      volumes:
        - name: demo-files
          configMap:
            name: demo-configfile
            items:
              - key: myfile
                path: demo.txt
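One approach (a sketch, not from the original thread) is to keep the ConfigMap mounted under /demo/files and copy the file into / when the container starts, since a volume cannot be mounted at / directly:

containers:
  - name: demo-container
    image: alpine
    imagePullPolicy: Always
    # copy the mounted file into / at startup, then keep the container alive
    command: ['sh', '-c', 'cp /demo/files/demo.txt / && sleep 36000']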

Kubernetes initContainers to copy file and execute as part of Lifecycle Hook PostStart

I am trying to execute some scripts as part of a StatefulSet deployment. I added the script as a ConfigMap and mount it as a volume inside the pod definition, then use the lifecycle postStart exec command to execute it. It fails with a permission issue.
Based on certain articles, I found that we should copy this file in an initContainer and then use that copy (I am not sure why we should do this and what difference it makes).
Still, I tried it, and that also gives the same error.
Here is my ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-configmap-initscripts
data:
  poststart.sh: |
    #!/bin/bash
    echo "It's done"
Here is my StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-statefulset
spec:
  ....
  serviceName: postgres-service
  replicas: 1
  template:
    ...
    spec:
      initContainers:
        - name: "postgres-ghost"
          image: alpine
          volumeMounts:
            - mountPath: /scripts
              name: postgres-scripts
      containers:
        - name: postgres
          image: postgres
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "/scripts/poststart.sh"]
          ports:
            - containerPort: 5432
              name: dbport
          ....
          volumeMounts:
            - mountPath: /scripts
              name: postgres-scripts
      volumes:
        - name: postgres-scripts
          configMap:
            name: postgres-configmap-initscripts
            items:
              - key: poststart.sh
                path: poststart.sh
The error I am getting:
A postStart hook will be called at least once but may be called more than once, so it is not a good place to run a script.
The poststart.sh file mounted from the ConfigMap will not have the execute bit set, hence the permission error.
It is better to run the script in initContainers. Here's a quick example that does a simple chmod; in your case you can execute the script instead:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: busybox
data:
  test.sh: |
    #!/bin/bash
    echo "It's done"
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    run: busybox
spec:
  volumes:
    - name: scripts
      configMap:
        name: busybox
        items:
          - key: test.sh
            path: test.sh
    - name: runnable
      emptyDir: {}
  initContainers:
    - name: prepare
      image: busybox
      imagePullPolicy: IfNotPresent
      command: ["ash","-c"]
      args: ["cp /scripts/test.sh /runnable/test.sh && chmod +x /runnable/test.sh"]
      volumeMounts:
        - name: scripts
          mountPath: /scripts
        - name: runnable
          mountPath: /runnable
  containers:
    - name: busybox
      image: busybox
      imagePullPolicy: IfNotPresent
      command: ["ash","-c"]
      args: ["while :; do . /runnable/test.sh; sleep 1; done"]
      volumeMounts:
        - name: scripts
          mountPath: /scripts
        - name: runnable
          mountPath: /runnable
EOF
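Once the pod is running, the result can be checked from the logs; since the main container sources the script every second, the log should repeat the echo line:

$ kubectl logs busybox
It's done
It's done
...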

Use a single volume to mount multiple files from secrets or configmaps

I use one secret to store multiple data items like this:
apiVersion: v1
kind: Secret
metadata:
  name: my-certs
  namespace: my-namespace
data:
  keystore.p12: LMNOPQRSTUV
  truststore.p12: ABCDEFGHIJK
In my Deployment I mount them into files like this:
volumeMounts:
  - mountPath: /app/truststore.p12
    name: truststore-secret
    subPath: truststore.p12
  - mountPath: /app/keystore.p12
    name: keystore-secret
    subPath: keystore.p12
volumes:
  - name: truststore-secret
    secret:
      items:
        - key: truststore.p12
          path: truststore.p12
      secretName: my-certs
  - name: keystore-secret
    secret:
      items:
        - key: keystore.p12
          path: keystore.p12
      secretName: my-certs
This works as expected, but I am wondering whether I can achieve the same result of mounting those two secrets as files with less YAML. For example, volumes use items, but I could not figure out how to use one volume with multiple items and mount those.
Yes, you can reduce your YAML with a Projected Volume.
Currently, secret, configMap, downwardAPI, and serviceAccountToken volumes can be projected.
TL;DR use this structure in your Deployment:
spec:
  containers:
    - name: {YOUR_CONTAINER_NAME}
      volumeMounts:
        - name: multiple-secrets-volume
          mountPath: "/app"
          readOnly: true
  volumes:
    - name: multiple-secrets-volume
      projected:
        sources:
          - secret:
              name: my-certs
And here's a full reproduction of your case. First, I registered your my-certs secret:
user@minikube:~/secrets$ kubectl get secret my-certs -o yaml
apiVersion: v1
data:
  keystore.p12: TE1OT1BRUlNUVVY=
  truststore.p12: QUJDREVGR0hJSks=
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"keystore.p12":"TE1OT1BRUlNUVVY=","truststore.p12":"QUJDREVGR0hJSks="},"kind":"Secret","metadata":{"annotations":{},"name":"my-certs","namespace":"default"}}
  creationTimestamp: "2020-01-22T10:43:51Z"
  name: my-certs
  namespace: default
  resourceVersion: "2759005"
  selfLink: /api/v1/namespaces/default/secrets/my-certs
  uid: d785045c-2931-434e-b6e1-7e090fdd6ff4
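Note that Secret data is stored base64-encoded, which is why the values above differ from the plain strings in the question:

$ echo -n "LMNOPQRSTUV" | base64
TE1OT1BRUlNUVVY=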
Then I created a pod to test access to the secret; this is projected.yaml:
user@minikube:~/secrets$ cat projected.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-projected-volume
spec:
  containers:
    - name: test-projected-volume
      image: busybox
      args:
        - sleep
        - "86400"
      volumeMounts:
        - name: multiple-secrets-volume
          mountPath: "/app"
          readOnly: true
  volumes:
    - name: multiple-secrets-volume
      projected:
        sources:
          - secret:
              name: my-certs
user@minikube:~/secrets$ kubectl apply -f projected.yaml
pod/test-projected-volume created
Then I tested access to the keys:
user@minikube:~/secrets$ kubectl exec -it test-projected-volume -- /bin/sh
/ # ls
app    bin    dev    etc    home   proc   root   sys    tmp    usr    var
/ # cd app
/app # ls
keystore.p12    truststore.p12
/app # cat keystore.p12
LMNOPQRSTUV/app #
/app # cat truststore.p12
ABCDEFGHIJK/app #
/app # exit
You can use a single secret with many data lines, as you requested, or you can project many secrets into the same volume, as in the following model:
volumeMounts:
  - name: all-in-one
    mountPath: "/projected-volume"
    readOnly: true
volumes:
  - name: all-in-one
    projected:
      sources:
        - secret:
            name: SECRET_1
        - secret:
            name: SECRET_2
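If you need to filter or rename the projected files, each projected secret source also accepts items, just as a plain secret volume does (a sketch reusing my-certs; the target path is illustrative):

volumes:
  - name: multiple-secrets-volume
    projected:
      sources:
        - secret:
            name: my-certs
            items:
              - key: keystore.p12
                path: certs/keystore.p12   # appears at <mountPath>/certs/keystore.p12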

Can we use same Configmap for different volume mounts?

Two pods are running with different volume mounts, but there is a need to use the same ConfigMap in the two running pods.
Sure, you can do that. You can mount the same ConfigMap into different volumes. Take a look at configure-pod-configmap.
Say your ConfigMap is like the following:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
And two pods:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod-01
spec:
  containers:
    - name: test-container
      image: busybox
      command: [ "/bin/sh", "-c", "ls /etc/config/" ]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: special-config
  restartPolicy: Never
---
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod-02
spec:
  containers:
    - name: test-container
      image: busybox
      command: [ "/bin/sh", "-c", "ls /etc/config/" ]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: special-config
  restartPolicy: Never
Now see the logs after creating the above ConfigMap and two Pods:
# for 1st Pod
$ kubectl logs -f dapi-test-pod-01
SPECIAL_LEVEL
SPECIAL_TYPE
# for 2nd Pod
$ kubectl logs -f dapi-test-pod-02
SPECIAL_LEVEL
SPECIAL_TYPE
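The mount paths do not have to match across pods; for example, the second pod could mount the same ConfigMap at a different path (illustrative):

volumeMounts:
  - name: config-volume
    mountPath: /etc/special-config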