ConfigMap volume is not mounting as a volume along with Secret - Kubernetes

I have references to both Secrets and ConfigMaps in my Deployment YAML file. The Secret volume is mounting as expected, but not the ConfigMap.
I created a ConfigMap with multiple files, and when I do a kubectl get configmap ... it shows all of the expected values. Also, when I create just ConfigMaps, the volume mounts fine, but not alongside a Secret.
I have tried different scenarios, from having both in the same directory to separating them, but none seem to work.
Here is my YAML. Am I referencing the ConfigMap correctly?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  selector:
    matchLabels:
      app: hello
  replicas: 1 # tells deployment to run 1 pod
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: XXX/yyy/image-50
        ports:
        - containerPort: 9009
          protocol: TCP
        volumeMounts:
        - mountPath: /shared
          name: configs-volume
          readOnly: true
        volumeMounts:
        - mountPath: data/secrets
          name: atp-secret
          readOnly: true
      volumes:
      - name: configs-volume
        configMap:
          name: prompts-config
          key: application.properties
      volumes:
      - name: atp-secret
        secret:
          defaultMode: 420
          secretName: dev-secret
      restartPolicy: Always
      imagePullSecrets:
      - name: ocirsecrets

You have two separate volumes: lists and also two separate volumeMounts: lists. YAML does not allow duplicate keys in a mapping: when the Pod spec is parsed, the second volumes: key silently replaces the first, so Kubernetes only ever sees the last list in each pair.
volumes:
- name: configs-volume
volumes: # this completely replaces the list of volumes
- name: atp-secret
In both cases you need a single volumes: or volumeMounts: key, and then multiple list items underneath those keys.
volumes:
- name: configs-volume
  configMap: { ... }
- name: atp-secret # do not repeat the volumes: key
  secret: { ... }
containers:
- name: hello
  volumeMounts:
  - name: configs-volume
    mountPath: /shared
  - name: atp-secret # do not repeat the volumeMounts: key
    mountPath: /data/secrets
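One more thing worth checking, independent of the duplicate keys: key: is not a valid field directly under a configMap volume source (the valid fields are name, items, defaultMode, and optional). If you only want to expose application.properties from the ConfigMap, use items: instead. A sketch using the names from your question:
volumes:
- name: configs-volume
  configMap:
    name: prompts-config
    items:
    - key: application.properties   # key in the ConfigMap
      path: application.properties  # file name under the mountPath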

Related

ConfigMap volume with defaultMode not working?

My YAML file is like below:
apiVersion: v1
kind: Pod
metadata:
  name: fortune-configmap-volume
spec:
  containers:
  - image: luksa/fortune:env
    env:
    - name: INTERVAL
      valueFrom:
        configMapKeyRef:
          name: fortune-config
          key: sleep-interval
    name: html-generator
    volumeMounts:
    - name: html
      mountPath: /var/htdocs
  - image: nginx:alpine
    name: web-server
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true
    - name: config
      mountPath: /etc/nginx/conf.d
  volumes:
  - name: html
    emptyDir: {}
  - name: config
    configMap:
      name: fortune-config
      defaultMode: 0777
As you can see, I set the configMap volume's defaultMode to 0777, but when I try to modify a file in the container path /etc/nginx/conf.d, it tells me the operation is not allowed. Why?
https://github.com/kubernetes/kubernetes/issues/62099
Secret, configMap, downwardAPI and projected volumes will be mounted as read-only volumes. Applications that attempt to write to these volumes will receive read-only filesystem errors. Previously, applications were allowed to make changes to these volumes, but those changes were reverted at an arbitrary interval by the system. Applications should be re-configured to write derived files to another location.
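If the application genuinely needs to modify those files at runtime, a common workaround is to copy them from the read-only ConfigMap mount into a writable emptyDir before the main container starts, for example with an initContainer. A minimal sketch based on the Pod above; the writable-config volume and the copy-config initContainer are added for illustration:
spec:
  initContainers:
  - name: copy-config
    image: busybox
    # copy the read-only ConfigMap contents into the writable emptyDir
    command: ['sh', '-c', 'cp /config-ro/* /config-rw/']
    volumeMounts:
    - name: config            # the read-only configMap volume
      mountPath: /config-ro
    - name: writable-config   # illustrative emptyDir volume
      mountPath: /config-rw
  containers:
  - image: nginx:alpine
    name: web-server
    volumeMounts:
    - name: writable-config   # now writable inside the container
      mountPath: /etc/nginx/conf.d
  volumes:
  - name: config
    configMap:
      name: fortune-config
  - name: writable-config
    emptyDir: {}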

What are the differences when defining a configMap in volumes?

I'm just curious: what are the differences between these two ways of defining a ConfigMap in the volumes section?
p.s. test-config includes a config.json file.
newpod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: configmappod
spec:
  containers:
  - name: configmapcontainer
    image: blue
    volumeMounts:
    - name: config-vol
      mountPath: "/config/newConfig.json"
      subPath: "config.json"
      readOnly: true
  volumes:
  - name: config-vol
    projected:
      sources:
      - configMap:
          name: test-config
          items:
          - key: config.json
            path: config.json
newpod2.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: configmappod
spec:
  containers:
  - name: configmapcontainer
    image: blue
    volumeMounts:
    - name: config-vol
      mountPath: "/config/newConfig.json"
      subPath: "config.json"
      readOnly: true
  volumes:
  - name: config-vol
    configMap:
      name: test-config
No difference; they yield the same result. By the way, the readOnly attribute is redundant: it has no effect in either case.
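Where projected does make a difference is when you need to merge several sources into a single volume, which a plain configMap volume cannot do. A sketch, assuming a hypothetical Secret named test-secret alongside the ConfigMap:
volumes:
- name: config-vol
  projected:
    sources:
    - configMap:
        name: test-config
    - secret:
        name: test-secret   # hypothetical; its keys land in the same directory as the ConfigMap's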

Use a single volume to mount multiple files from secrets or configmaps

I use one secret to store multiple data items like this:
apiVersion: v1
kind: Secret
metadata:
  name: my-certs
  namespace: my-namespace
data:
  keystore.p12: LMNOPQRSTUV
  truststore.p12: ABCDEFGHIJK
In my Deployment I mount them into files like this:
volumeMounts:
- mountPath: /app/truststore.p12
  name: truststore-secret
  subPath: truststore.p12
- mountPath: /app/keystore.p12
  name: keystore-secret
  subPath: keystore.p12
volumes:
- name: truststore-secret
  secret:
    items:
    - key: truststore.p12
      path: truststore.p12
    secretName: my-certs
- name: keystore-secret
  secret:
    items:
    - key: keystore.p12
      path: keystore.p12
    secretName: my-certs
This works as expected, but I am wondering whether I can achieve the same result of mounting those two secrets as files with less YAML. For example, volumes use items, but I could not figure out how to use one volume with multiple items and mount those.
Yes, you can reduce your YAML with a Projected Volume.
Currently, secret, configMap, downwardAPI, and serviceAccountToken volumes can be projected.
TL;DR use this structure in your Deployment:
spec:
  containers:
  - name: {YOUR_CONTAINER_NAME}
    volumeMounts:
    - name: multiple-secrets-volume
      mountPath: "/app"
      readOnly: true
  volumes:
  - name: multiple-secrets-volume
    projected:
      sources:
      - secret:
          name: my-certs
And here's the full reproduction of your case. First, I registered your my-certs secret:
user@minikube:~/secrets$ kubectl get secret my-certs -o yaml
apiVersion: v1
data:
  keystore.p12: TE1OT1BRUlNUVVY=
  truststore.p12: QUJDREVGR0hJSks=
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"keystore.p12":"TE1OT1BRUlNUVVY=","truststore.p12":"QUJDREVGR0hJSks="},"kind":"Secret","metadata":{"annotations":{},"name":"my-certs","namespace":"default"}}
  creationTimestamp: "2020-01-22T10:43:51Z"
  name: my-certs
  namespace: default
  resourceVersion: "2759005"
  selfLink: /api/v1/namespaces/default/secrets/my-certs
  uid: d785045c-2931-434e-b6e1-7e090fdd6ff4
Then I created a pod to test access to the secret; this is projected.yaml:
user@minikube:~/secrets$ cat projected.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-projected-volume
spec:
  containers:
  - name: test-projected-volume
    image: busybox
    args:
    - sleep
    - "86400"
    volumeMounts:
    - name: multiple-secrets-volume
      mountPath: "/app"
      readOnly: true
  volumes:
  - name: multiple-secrets-volume
    projected:
      sources:
      - secret:
          name: my-certs
user@minikube:~/secrets$ kubectl apply -f projected.yaml
pod/test-projected-volume created
Then tested the access to the keys:
user@minikube:~/secrets$ kubectl exec -it test-projected-volume -- /bin/sh
/ # ls
app bin dev etc home proc root sys tmp usr var
/ # cd app
/app # ls
keystore.p12 truststore.p12
/app # cat keystore.p12
LMNOPQRSTUV/app #
/app # cat truststore.p12
ABCDEFGHIJK/app #
/app # exit
You have the option to use a single secret with many data lines, as you requested, or you can combine many separate secrets in the same deployment, following this model:
volumeMounts:
- name: all-in-one
  mountPath: "/projected-volume"
  readOnly: true
volumes:
- name: all-in-one
  projected:
    sources:
    - secret:
        name: SECRET_1
    - secret:
        name: SECRET_2
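As a side note, since both files in the question live in the same Secret, even a plain secret volume without projected or items mounts every key of the Secret as a file in the target directory. A minimal sketch with the names from the question (the volume name certs is illustrative):
volumeMounts:
- name: certs
  mountPath: /app
  readOnly: true
volumes:
- name: certs
  secret:
    secretName: my-certs   # both keystore.p12 and truststore.p12 appear under /app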

Kubernetes deployment file: inject environment variables with a pre-run script

I have an Elixir app connecting to Postgres using the Cloud SQL proxy.
Here is my deployment.yaml. I deploy it on Kubernetes and it works well;
the Postgres connection password and username are taken in the image from the environment variables in the YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
  namespace: production
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: my-app
        tier: backend
    spec:
      securityContext:
        runAsUser: 0
        runAsNonRoot: false
      containers:
      - name: my-app
        image: my-image:1.0.1
        volumeMounts:
        - name: secrets-volume
          mountPath: /secrets
          readOnly: true
        - name: config-volume
          mountPath: /beamconfig
        ports:
        - containerPort: 80
        args:
        - foreground
        env:
        - name: POSTGRES_HOSTNAME
          value: localhost
        - name: POSTGRES_USERNAME
          value: postgres
        - name: POSTGRES_PASSWORD
          value: "123456"
      # proxy_container
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                  "-instances=my-project:region:my-postgres-instance=tcp:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
        - name: cloudsql
          mountPath: /cloudsql
      # volumes
      volumes:
      - name: secrets-volume
        secret:
          secretName: gcloud-json
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
      - name: cloudsql
        emptyDir: {}
Now, due to security requirements, I'd like to store the sensitive environment variables encrypted and have a script decrypt them.
My YAML file would then look like this:
env:
- name: POSTGRES_HOSTNAME
  value: localhost
- name: ENCRYPTED_POSTGRES_USERNAME
  value: hgkdhrkhgrk
- name: ENCRYPTED_POSTGRES_PASSWORD
  value: fkjeshfke
Then I have a script that would run over all environment variables with the ENCRYPTED_ prefix, decrypt them, and insert the decrypted value under the environment variable name without the ENCRYPTED_ prefix.
Is there a way to do that?
The environment variables should be injected before the image starts running.
Another requirement is that the pod running the image must be the one to decrypt the variables, since it is the only one with permission to do so (working with workload identity).
something like:
- command:
  - sh
  - /decrypt_and_inject_environments.sh
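One way to get this effect, sketched under the assumption that the image's entrypoint can be overridden: wrap the real command in a shell snippet that decrypts in the same process and then execs the app, so the plain-text values exist only in that container's environment. The decrypt_value helper and the /app/bin/my_app path are hypothetical placeholders for your workload-identity decryption tooling and the real entrypoint:
containers:
- name: my-app
  image: my-image:1.0.1
  command: ["/bin/sh", "-c"]
  args:
  - |
    # for every ENCRYPTED_* variable, export the decrypted value
    # under the same name without the prefix, then start the app
    for name in $(env | grep '^ENCRYPTED_' | cut -d= -f1); do
      eval "value=\$$name"
      export "${name#ENCRYPTED_}=$(decrypt_value "$value")"  # decrypt_value is hypothetical
    done
    exec /app/bin/my_app foreground                          # hypothetical entrypoint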

RabbitMQ configuration file is not copying in the Kubernetes deployment

I'm trying to deploy RabbitMQ on a Kubernetes cluster and am using an initContainer to copy a file from a ConfigMap. However, the file is not copied after the POD is in a running state.
Initially, I tried without an initContainer, but I was getting an error like "touch: cannot touch '/etc/rabbitmq/rabbitmq.conf': Read-only file system."
kind: Deployment
metadata:
  name: broker01
  namespace: s2sdocker
  labels:
    app: broker01
spec:
  replicas: 1
  selector:
    matchLabels:
      app: broker01
  template:
    metadata:
      name: broker01
      labels:
        app: broker01
    spec:
      initContainers:
      - name: configmap-copy
        image: busybox
        command: ['/bin/sh', '-c', 'cp /etc/rabbitmq/files/definitions.json /etc/rabbitmq/']
        volumeMounts:
        - name: broker01-definitions
          mountPath: /etc/rabbitmq/files
        - name: pre-install
          mountPath: /etc/rabbitmq
      containers:
      - name: broker01
        image: rabbitmq:3.7.17-management
        envFrom:
        - configMapRef:
            name: broker01-rabbitmqenv-cm
        ports:
        volumeMounts:
        - name: broker01-data
          mountPath: /var/lib/rabbitmq
        - name: broker01-log
          mountPath: /var/log/rabbitmq/log
        - name: broker01-definitions
          mountPath: /etc/rabbitmq/files
      volumes:
      - name: pre-install
        emptyDir: {}
      - name: broker01-data
        persistentVolumeClaim:
          claimName: broker01-data-pvc
      - name: broker01-log
        persistentVolumeClaim:
          claimName: broker01-log-pvc
      - name: broker01-definitions
        configMap:
          name: broker01-definitions-cm
The file "definitions.json" should be copied to /etc/reabbitmq folder. I have followed "Kubernetes deployment read-only filesystem error". But issue did not fix.
After making changes in the "containers volumeMount section," I was able to copy the file on to /etc/rabbitmq folder.
Please find a modified code here.
- name: broker01
  image: rabbitmq:3.7.17-management
  envFrom:
  - configMapRef:
      name: broker01-rabbitmqenv-cm
  ports:
  volumeMounts:
  - name: broker01-data
    mountPath: /var/lib/rabbitmq
  - name: broker01-log
    mountPath: /var/log/rabbitmq/log
  - name: pre-install
    mountPath: /etc/rabbitmq
Can you check the permissions on /etc/rabbitmq/? Does the user have permission to copy the file to that location?
- name: pre-install
  mountPath: /etc/rabbitmq
I see that /etc/rabbitmq is a mount point. It is a read-only file system, and hence the file copy failed. Can you update the permissions on the 'pre-install' mount point?
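For reference, the pattern that ends up working here is: the initContainer copies the file from the read-only ConfigMap mount into a writable emptyDir, and the main container mounts that same emptyDir at /etc/rabbitmq. A condensed sketch using only the names already present in the question:
initContainers:
- name: configmap-copy
  image: busybox
  # copy from the read-only ConfigMap mount into the writable emptyDir
  command: ['/bin/sh', '-c', 'cp /etc/rabbitmq/files/definitions.json /etc/rabbitmq/']
  volumeMounts:
  - name: broker01-definitions
    mountPath: /etc/rabbitmq/files
  - name: pre-install
    mountPath: /etc/rabbitmq
containers:
- name: broker01
  image: rabbitmq:3.7.17-management
  volumeMounts:
  - name: pre-install          # the same emptyDir, now holding definitions.json
    mountPath: /etc/rabbitmq
volumes:
- name: pre-install
  emptyDir: {}
- name: broker01-definitions
  configMap:
    name: broker01-definitions-cm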