Kubernetes puzzle: Populate environment variable from file (mounted volume)

I have a Pod or Job YAML spec file (which I can edit) and I want to launch it from my local machine (e.g. using kubectl create -f my_spec.yaml).
The spec declares a volume mount. There will be a file in that volume that I want to use as the value of an environment variable.
I want the contents of that file to end up in the environment variable (without jumping through hoops by somehow "downloading" the file to my local machine and inserting it into the spec).
P.S. It's obvious how to do this if you have control over the command of the container. But when launching an arbitrary image, I have no control over the command attribute, as I do not know it.
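For reference, a minimal sketch of that command-override approach, assuming one did know the image's entrypoint (here a hypothetical /app/run):

containers:
- name: main
  image: arbitrary-image
  command: ["/bin/sh", "-c"]
  args: ["export my_var=$(cat /mnt/my_var_value.txt); exec /app/run"]
  volumeMounts:
  - name: my-vol
    mountPath: /mnt

The spec I actually want to launch is: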
apiVersion: batch/v1
kind: Job
metadata:
  generateName: puzzle
spec:
  template:
    spec:
      containers:
      - name: main
        image: arbitrary-image
        env:
        - name: my_var
          valueFrom: <Contents of /mnt/my_var_value.txt>
        volumeMounts:
        - name: my-vol
          mountPath: /mnt
      volumes:
      - name: my-vol
        persistentVolumeClaim:
          claimName: my-pvc

You can create a deployment running kubectl in an endless loop that constantly polls the volume and updates a ConfigMap from it. After that you can mount the created ConfigMap into your pod. It's a little bit hacky, but it will work and will keep your ConfigMap updated automatically. The only requirement is that the PV must be ReadWriteMany or ReadOnlyMany (but in that case you can mount it in read-only mode in all pods).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cm-creator
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: cm-creator
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create", "update", "get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cm-creator
  namespace: default
subjects:
- kind: User
  name: system:serviceaccount:default:cm-creator
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: cm-creator
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cm-creator
  namespace: default
  labels:
    app: cm-creator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cm-creator
  template:
    metadata:
      labels:
        app: cm-creator
    spec:
      serviceAccountName: cm-creator
      containers:
      - name: cm-creator
        image: bitnami/kubectl
        command:
        - /bin/bash
        - -c
        args:
        - while true; do
            kubectl create cm myconfig --from-file=my_var=/mnt/my_var_value.txt --dry-run=client -o yaml | kubectl apply -f -;
            sleep 60;
          done
        volumeMounts:
        - name: my-vol
          mountPath: /mnt
          readOnly: true
      volumes:
      - name: my-vol
        persistentVolumeClaim:
          claimName: my-pvc
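Once the ConfigMap exists, the Job from the question can consume it as an environment variable, for example (a sketch; the key name my_var matches the --from-file argument above):

env:
- name: my_var
  valueFrom:
    configMapKeyRef:
      name: myconfig
      key: my_var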

Related

Populating a container's environment variables from a mounted ConfigMap in Kubernetes

I'm currently learning Kubernetes and recently learnt about using ConfigMaps for a container's environment variables.
Let's say I have the following simple ConfigMap:
apiVersion: v1
data:
  MYSQL_ROOT_PASSWORD: password
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: mycm
I know that a container of some deployment can consume this environment variable via:
kubectl set env deployment mydb --from=configmap/mycm
or by specifying it manually in the manifest like so:
containers:
- env:
  - name: MYSQL_ROOT_PASSWORD
    valueFrom:
      configMapKeyRef:
        key: MYSQL_ROOT_PASSWORD
        name: mycm
However, this isn't what I am after, since I'd have to manually change the environment variables each time the ConfigMap changes.
I am aware that mounting a ConfigMap to the Pod's volume allows for the auto-updating of ConfigMap values. I'm currently trying to find a way to set a Container's environment variables to those stored in the mounted config map.
So far I have the following YAML manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: mydb
  name: mydb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mydb
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mydb
    spec:
      containers:
      - image: mariadb
        name: mariadb
        resources: {}
        args: ["export MYSQL_ROOT_PASSWORD=$(cat /etc/config/MYSQL_ROOT_PASSWORD)"]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: temp
      volumes:
      - name: config-volume
        configMap:
          name: mycm
status: {}
I'm attempting to set MYSQL_ROOT_PASSWORD to some temporary value and then update it to the mounted value as soon as the container starts, via args: ["export MYSQL_ROOT_PASSWORD=$(cat /etc/config/MYSQL_ROOT_PASSWORD)"]
As I somewhat expected, this didn't work, resulting in the following error:
/usr/local/bin/docker-entrypoint.sh: line 539: /export MYSQL_ROOT_PASSWORD=$(cat /etc/config/MYSQL_ROOT_PASSWORD): No such file or directory
I assume this is because the volume is mounted after the entrypoint. I tried adding a readiness probe to wait for the mount but this didn't work either:
readinessProbe:
  exec:
    command: ["sh", "-c", "test -f /etc/config/MYSQL_ROOT_PASSWORD"]
  initialDelaySeconds: 5
  periodSeconds: 5
Is there any easy way to achieve what I'm trying to do, or is it impossible?
So I managed to find a solution, with a lot of inspiration from this answer.
Essentially, what I did was create a sidecar container based on the alpine/k8s image that mounts the ConfigMap and constantly watches it for changes, since Kubernetes automatically updates a mounted ConfigMap when the ConfigMap object changes. This required the following script, watch_passwd.sh, which uses inotifywait to watch for changes and then uses the Kubernetes API (via kubectl) to roll out the changes accordingly:
#!/bin/sh
# Recreate the Secret from the mounted ConfigMap value
update_passwd() {
    kubectl delete secret mysql-root-passwd > /dev/null 2>&1
    kubectl create secret generic mysql-root-passwd --from-file=/etc/config/MYSQL_ROOT_PASSWORD
}

update_passwd

while true
do
    # Block until the mounted file changes, then refresh the Secret and restart the target deployment
    inotifywait -e modify "/etc/config/MYSQL_ROOT_PASSWORD"
    update_passwd
    kubectl rollout restart deployment "$1"
done
The Dockerfile is then:
FROM docker.io/alpine/k8s:1.25.6
RUN apk update && apk add inotify-tools
COPY watch_passwd.sh .
After building the image (locally in this case) as mysidecar, I create the ServiceAccount, Role, and RoleBinding outlined here, adding rules for deployments so that they can be restarted by the sidecar.
After this, I piece it all together to create the following YAML Manifest (note that imagePullPolicy is set to Never, since I created the image locally):
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: mydb
  name: mydb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mydb
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mydb
    spec:
      serviceAccountName: secretmaker
      containers:
      - image: mysidecar
        name: mysidecar
        imagePullPolicy: Never
        command:
        - /bin/sh
        - -c
        - |
          ./watch_passwd.sh $(DEPLOYMENT_NAME)
        env:
        - name: DEPLOYMENT_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['app']
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config
      - image: mariadb
        name: mariadb
        resources: {}
        envFrom:
        - secretRef:
            name: mysql-root-passwd
      volumes:
      - name: config-volume
        configMap:
          name: mycm
status: {}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: secretmaker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app: mydb
  name: secretmaker
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create", "get", "delete", "list"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app: mydb
  name: secretmaker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secretmaker
subjects:
- kind: ServiceAccount
  name: secretmaker
  namespace: default
---
It all works as expected! Hopefully this is able to help someone out in the future. Also, if anybody comes across this and has a better solution please feel free to let me know :)
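As a quick sanity check, one can change the value in the ConfigMap and watch the deployment roll, e.g.:

kubectl patch configmap mycm -p '{"data":{"MYSQL_ROOT_PASSWORD":"newpassword"}}'
kubectl rollout status deployment mydb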

Is there a kubernetes role definition to allow the command `kubectl rollout restart deploy <deployment>`?

I want a deployment in kubernetes to have the permission to restart itself, from within the cluster.
I know I can create a serviceaccount and bind it to the pod, but I'm missing the name of the most specific permission (i.e. not just allowing '*') to allow for the command
kubectl rollout restart deploy <deployment>
here's what I have, and ??? is what I'm missing
apiVersion: v1
kind: ServiceAccount
metadata:
  name: restart-sa
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: restarter
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["list", "???"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: testrolebinding
  namespace: default
subjects:
- kind: ServiceAccount
  name: restart-sa
  namespace: default
roleRef:
  kind: Role
  name: restarter
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - image: nginx
    name: nginx
  serviceAccountName: restart-sa
I believe the following is the minimum set of permissions required to restart a deployment:
rules:
- apiGroups: ["apps", "extensions"]
  resources: ["deployments"]
  resourceNames: [$DEPLOYMENT]
  verbs: ["get", "patch"]
If you want permission to restart a Kubernetes deployment from within the cluster, you need to grant it through RBAC authorization.
In your YAML you are missing some specific verbs under the Role's rules; add them in the format below:
verbs: ["get", "watch", "list", "patch"]
Instead of a bare Pod, use a Deployment in the YAML file.
Make sure that you add serviceAccountName: restart-sa in the Deployment YAML under the pod template's spec (alongside containers), as shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
      serviceAccountName: restart-sa
Then you can restart the deployment using the below command:
$ kubectl rollout restart deployment [deployment_name]

Using the rollout restart command in cronjob, in GKE

I want to periodically restart the deployment using k8s cronjob.
Please check what the problem with the YAML file is.
When I execute the command from my local command line, the deployment restarts normally, but the restart does not seem to work from the CronJob.
e.g. $ kubectl rollout restart deployment my-ingress -n my-app
my cronjob yaml file
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: deployment-restart
  namespace: my-app
spec:
  schedule: '0 8 */60 * *'
  jobTemplate:
    spec:
      backoffLimit: 2
      activeDeadlineSeconds: 600
      template:
        spec:
          serviceAccountName: deployment-restart
          restartPolicy: Never
          containers:
          - name: kubectl
            image: bitnami/kubectl:latest
            command:
            - 'kubectl'
            - 'rollout'
            - 'restart'
            - 'deployment/my-ingress -n my-app'
As David suggested, run the kubectl command in the CronJob through a shell so that the whole command line is executed as a single string:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-jp-runner
          containers:
          - name: hello
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            - kubectl rollout restart deployment my-ingress -n my-app
          restartPolicy: OnFailure
I would also suggest you check the role and service account permissions; an example for reference:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: kubectl-cron
rules:
- apiGroups:
  - extensions
  - apps
  resources:
  - deployments
  verbs:
  - 'patch'
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubectl-cron
  namespace: default
subjects:
- kind: ServiceAccount
  name: sa-kubectl-cron
  namespace: default
roleRef:
  kind: Role
  name: kubectl-cron
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-kubectl-cron
  namespace: default
---
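To test the permissions without waiting for the schedule, you can trigger the CronJob manually by creating a Job from it (using the hello CronJob above as an example):

kubectl create job test-restart --from=cronjob/hello
kubectl logs job/test-restart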

Kubernetes: use environment variable/ConfigMap in PersistentVolume host path

Does anyone know if it is possible to use an environment variable or a ConfigMap in the hostPath of a PersistentVolume? I found that it's possible with Helm, envsubst, etc., but I want to use only Kubernetes features.
I need to create a volume whose path is not static.
Here is my PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: some-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "${PATH_FROM_ENV}/some-path"
You can't do it natively, but a Kubernetes Job that reads the value from a ConfigMap can do it for you.
We will create a Job with the proper RBAC permissions; it uses the kubectl image, reads the ConfigMap, and substitutes the value into the PV creation manifest.
Here are the manifests:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  namespace: default
  name: pv-generator-role
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["persistentvolumes"]
  verbs: ["create"]
- apiGroups: [""] # "" indicates the core API group
  resources: ["configmaps"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pv-generator-role-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: pv-generator-sa
  namespace: default
roleRef:
  kind: ClusterRole
  name: pv-generator-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pv-generator-sa
---
apiVersion: batch/v1
kind: Job
metadata:
  name: pv-generator
spec:
  template:
    spec:
      serviceAccountName: pv-generator-sa
      containers:
      - name: kubectl
        image: bitnami/kubectl
        command:
        - sh
        - "-c"
        - |
          /bin/bash <<'EOF'
          cat <<EOF | kubectl apply -f -
          apiVersion: v1
          kind: PersistentVolume
          metadata:
            name: some-pv
            labels:
              type: local
          spec:
            storageClassName: manual
            capacity:
              storage: 2Gi
            accessModes:
            - ReadWriteOnce
            hostPath:
              path: $(kubectl get cm path-configmap -ojsonpath="{.data.path}")/some-path
          EOF
      restartPolicy: Never
  backoffLimit: 4
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: path-configmap
  namespace: default
data:
  path: /mypath
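Once everything is applied, one could wait for the Job to finish and confirm the rendered path, for example:

kubectl wait --for=condition=complete job/pv-generator
kubectl get pv some-pv -o jsonpath='{.spec.hostPath.path}'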

How to deploy a StatefulSet when a Pod Security Policy is in place in Kubernetes

I am trying to play around with PodSecurityPolicies in kubernetes so pods can't be created if they are using the root user.
This is my psp definition:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: eks.restrictive
spec:
  hostNetwork: false
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: MustRunAsNonRoot
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
and this is my statefulset definition
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      securityContext:
        # only takes integers
        runAsUser: 1000
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "my-storage-class"
      resources:
        requests:
          storage: 1Gi
When trying to create this statefulset I get
create Pod web-0 in StatefulSet web failed error: pods "web-0" is forbidden: unable to validate against any pod security policy:
It doesn't specify which policy I am violating, and since I specify that I want to run as user 1000, I am not running as root (hence my understanding is that this StatefulSet pod definition does not violate any rule defined in the PSP). There is no USER specified in the Dockerfile used for this image.
The other weird part is that this works fine for standard pods (kind: Pod instead of kind: StatefulSet); for example, this works just fine when the same PSP exists:
apiVersion: v1
kind: Pod
metadata:
  name: my-nodejs
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: my-node
    image: node
    ports:
    - name: web
      containerPort: 80
      protocol: TCP
    command:
    - /bin/sh
    - -c
    - |
      npm install http-server -g
      npx http-server
What am I missing / doing wrong?
You seem to have forgotten to bind this PSP to a service account.
You need to apply the following:
cat << EOF | kubectl apply -f-
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-role
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - eks.restrictive
EOF
cat << EOF | kubectl apply -f-
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-role
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
EOF
If you don't want to use the default account, you can create a separate service account and bind the role to it.
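A minimal sketch of that, assuming a hypothetical service account named psp-sa (you would then set serviceAccountName: psp-sa in the StatefulSet's pod template):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: psp-sa
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-role-binding-sa
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-role
subjects:
- kind: ServiceAccount
  name: psp-sa
  namespace: default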
Read more about it in the Kubernetes documentation - pod-security-policy.