Can we use the same ConfigMap for different volume mounts? - kubernetes

Two pods are running and have different volume mounts, but there is a need to use the same ConfigMap in the two running pods.

Sure, you can do that. You can mount the same ConfigMap into different volumes. You can take a look at configure-pod-configmap.
Say your ConfigMap is like the following:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
And two pods:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod-01
spec:
  containers:
  - name: test-container
    image: busybox
    command: [ "/bin/sh", "-c", "ls /etc/config/" ]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      # Provide the name of the ConfigMap containing the files you want
      # to add to the container
      name: special-config
  restartPolicy: Never
---
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod-02
spec:
  containers:
  - name: test-container
    image: busybox
    command: [ "/bin/sh", "-c", "ls /etc/config/" ]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      # Provide the name of the ConfigMap containing the files you want
      # to add to the container
      name: special-config
  restartPolicy: Never
Now see the logs after creating the above ConfigMap and two Pods:
# for 1st Pod
$ kubectl logs -f dapi-test-pod-01
SPECIAL_LEVEL
SPECIAL_TYPE
# for 2nd Pod
$ kubectl logs -f dapi-test-pod-02
SPECIAL_LEVEL
SPECIAL_TYPE
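The mount paths do not have to match either; the second Pod could mount the same special-config ConfigMap somewhere else entirely. A minimal variation of the manifest above (only the volumeMounts part changes):
    volumeMounts:
    - name: config-volume
      mountPath: /etc/other-config   # same ConfigMap, different path inside this Pod
Both Pods still see the same SPECIAL_LEVEL and SPECIAL_TYPE files; only the location inside each container differs.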

Related

How to pass environment variables to a script present in ConfigMap while accessing it as a volume in Kubernetes

I have the following ConfigMap, which has a variable called VAR. This variable should get its value from the workflow while the ConfigMap is accessed as a volume:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-pod-cfg
data:
  test-pod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-pod
    spec:
      containers:
      - name: test
        image: ubuntu
        command: ["/busybox/sh", "-c", "echo $VAR"]
Here is the Argo workflow, which fetches the script test-pod.yaml from the ConfigMap and mounts it as a volume in the container. How can the environment variable VAR be passed to the ConfigMap so that it is replaced dynamically?
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: test-wf-
spec:
  entrypoint: main
  templates:
  - name: main
    container:
      image: "ubuntu"
      command: ["/bin/sh", "-c", "cat /mnt/vc/test"]
      volumeMounts:
      - name: vc
        mountPath: "/mnt/vc"
  volumes:
  - name: vc
    configMap:
      name: test-pod-cfg
      items:
      - key: test-pod.yaml
        path: test
To mount the ConfigMap as a volume and make the environment variable VAR available to the container, you will need to add a volume to the pod's spec and set the environment variable in the container's spec.
In the volume spec, you will need to add the ConfigMap as a volume source and set the path to the file containing the environment variable. For example:
spec:
  entrypoint: test-pod
  templates:
  - name: test-pod
    container:
      image: ubuntu
      command: ["/busybox/sh", "-c", "echo $VAR"]
      volumeMounts:
      - name: config
        mountPath: /etc/config
      env:
      - name: VAR
        valueFrom:
          configMapKeyRef:
            name: test-pod-cfg
            key: test-pod.yaml
  volumes:
  - name: config
    configMap:
      name: test-pod-cfg
The environment variable VAR will then be available in the container with the value specified in the ConfigMap.
For more information, follow the official documentation.
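Note that mounting the ConfigMap does not rewrite the file itself; $VAR is only expanded when a shell evaluates it. If the goal is to produce a rendered copy of test-pod.yaml with VAR already substituted, one possible approach (an assumption, not part of the answer above; envsubst comes from the gettext package and may need to be installed in the image, or sed can do the same job) is to render the template when the container starts:
    container:
      image: ubuntu
      # render the mounted template, expanding $VAR from the environment
      command: ["/bin/sh", "-c", "envsubst < /mnt/vc/test > /tmp/test-rendered.yaml && cat /tmp/test-rendered.yaml"]
      env:
      - name: VAR
        value: "some-value"   # hypothetical; supply VAR however your workflow provides it
      volumeMounts:
      - name: vc
        mountPath: /mnt/vc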

How can I remove an element in Deployment volumeMounts with the kubectl patch command?

I have a Deployment like this:
apiVersion: apps/v1
kind: Deployment
spec:
template:
volumeMounts:
- mountPath: /home
name: john-webos-vol
subPath: home
- mountPath: /pkg
name: john-vol
readOnly: true
subPath: school
I want to change the Deployment with the kubectl patch command, so that it has the following volumeMounts in the PodTemplate instead:
target.yaml:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    volumeMounts:
    - mountPath: /home
      name: john-webos-vol
      subPath: home
I used the below command, but it didn't work.
kubectl patch deployment sample --patch "$(cat target.yaml)"
Can anyone give me some advice?
You can't do this with kubectl patch. The patch you used is called a strategic merge patch, and that kind of patch can't remove things; it can only add or merge them.
For example, if you initially have one container in your Pod spec and need to add another container, you can use this patch to add it. But if you have two containers and need to remove one, you can't do that with this kind of patch.
If you want to do this with a patch, you need to use retainKeys. Ref
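For completeness, strategic merge patch also has a delete directive for items in lists that carry a merge key; volumeMounts are merged on mountPath, so a patch along the following lines may work. Treat this as a hedged sketch and verify it on your cluster first; remove-home-mount.yaml and your-container are placeholder names:
# remove-home-mount.yaml (hypothetical file name)
spec:
  template:
    spec:
      containers:
      - name: your-container      # substitute the real container name
        volumeMounts:
        - $patch: delete          # strategic-merge-patch directive
          mountPath: /home        # merge key of the entry to remove
kubectl patch deployment sample --patch "$(cat remove-home-mount.yaml)"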
Let me explain how you can do this in another, simpler way. Let's assume you have applied the below test.yaml with
kubectl apply -f test.yaml
test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /home
          name: john-webos-vol
          subPath: home
        - mountPath: /pkg
          name: john-vol
          readOnly: true
          subPath: school
      volumes:
      - name: john-webos-vol
        emptyDir: {}
      - name: john-vol
        emptyDir: {}
Now you need to update this Deployment, and the updated target.yaml will remove one of the volumes.
target.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /pkg
          name: john-vol
          readOnly: true
          subPath: school
      volumes:
      - name: john-vol
        emptyDir: {}
You can just use:
kubectl apply -f target.yaml
This will update your Deployment with the new configuration.
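To confirm that the mount was actually removed from the live object, you can inspect it, for example:
kubectl get deployment test -o jsonpath='{.spec.template.spec.containers[0].volumeMounts[*].mountPath}'
which should now print only /pkg.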
You can use a JSON patch: http://jsonpatch.com/
Remove a specific volume mount:
kubectl patch deployment <NAME> --type json -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/volumeMounts/0"}]'
Replace the volume mounts with what you need:
kubectl patch deployment <NAME> --type json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/volumeMounts", "value": [{"mountPath": "/home", "name": "john-webos-vol", "subPath": "home"}]}]'
kubectl cheat sheet for more info: https://kubernetes.io/docs/reference/kubectl/cheatsheet/#patching-resources
You can leverage the apply command by getting the Deployment definition in YAML format, modifying (in your case removing) this section
- mountPath: /pkg
  name: john-vol
  readOnly: true
  subPath: school
with sed or a similar utility, and then applying it back:
kubectl get deployment <myDeployment> -n <myNamespace> -o yaml | sed -z -s -E -b -e 's/REGEX_TO_MATCH_PART_OF_DEPLOYMENT_TO_REMOVE//g' | kubectl apply -f -
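A somewhat more robust variant of the same idea, assuming jq is available (an alternative sketch, not part of the original answer), deletes the unwanted entry by its mountPath instead of relying on a regex:
kubectl get deployment <myDeployment> -n <myNamespace> -o json \
  | jq 'del(.spec.template.spec.containers[0].volumeMounts[] | select(.mountPath == "/pkg"))' \
  | kubectl apply -f -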

kubernetes deployment mounts secret as a folder instead of a file

I have a config file as a Secret in Kubernetes and I want to mount it into a specific location inside the container. The problem is that the volume created inside the container is a folder instead of a file with the content of the Secret in it. Any way to fix it?
My deployment looks like this:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: jetty
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jetty
  template:
    metadata:
      labels:
        app: jetty
    spec:
      containers:
      - name: jetty
        image: quay.io/user/jetty
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: config-properties
          mountPath: "/opt/jetty/config.properties"
          subPath: config.properties
        - name: secrets-properties
          mountPath: "/opt/jetty/secrets.properties"
        - name: doc-path
          mountPath: /mnt/storage/
        resources:
          limits:
            cpu: '1000m'
            memory: '3000Mi'
          requests:
            cpu: '750m'
            memory: '2500Mi'
      volumes:
      - name: config-properties
        configMap:
          name: jetty-config-properties
      - name: secrets-properties
        secret:
          secretName: jetty-secrets
      - name: doc-path
        persistentVolumeClaim:
          claimName: jetty-docs-pvc
      imagePullSecrets:
      - name: rcc-quay
Secrets vs ConfigMaps
Secrets let you store and manage sensitive information (e.g. passwords, private keys) and ConfigMaps are used for non-sensitive configuration data.
As you can see in the Secrets and ConfigMaps documentation:
A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key.
A ConfigMap allows you to decouple environment-specific configuration from your container images, so that your applications are easily portable.
Mounting a Secret as a file
It is possible to create a Secret and pass it as a file, or multiple files, to Pods.
I've created a simple example for you to illustrate how it works.
Below you can see a sample Secret manifest file and a Deployment that uses this Secret:
NOTE: I used subPath with Secrets and it works as expected.
---
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  secret.file1: |
    c2VjcmV0RmlsZTEK
  secret.file2: |
    c2VjcmV0RmlsZTIK
---
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - name: secrets-files
      mountPath: "/mnt/secret.file1" # "secret.file1" file will be created in "/mnt" directory
      subPath: secret.file1
    - name: secrets-files
      mountPath: "/mnt/secret.file2" # "secret.file2" file will be created in "/mnt" directory
      subPath: secret.file2
  volumes:
  - name: secrets-files
    secret:
      secretName: my-secret # name of the Secret
Note: the Secret should be created before the Deployment.
After creating the Secret and Deployment, we can see how it works:
$ kubectl get secret,deploy,pod
NAME               TYPE     DATA   AGE
secret/my-secret   Opaque   2      76s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           76s

NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-7c67965687-ph7b8   1/1     Running   0          76s
$ kubectl exec nginx-7c67965687-ph7b8 -- ls /mnt
secret.file1
secret.file2
$ kubectl exec nginx-7c67965687-ph7b8 -- cat /mnt/secret.file1
secretFile1
$ kubectl exec nginx-7c67965687-ph7b8 -- cat /mnt/secret.file2
secretFile2
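Applied to the Deployment from the question, the analogous fix is to add subPath to the secrets-properties mount, roughly like this (assuming the key inside the jetty-secrets Secret is called secrets.properties; substitute the real key name):
        volumeMounts:
        - name: secrets-properties
          mountPath: "/opt/jetty/secrets.properties"
          subPath: secrets.properties   # must match the key name in the jetty-secrets Secret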
Projected Volume
I think a better way to achieve your goal is to use a projected volume.
A projected volume maps several existing volume sources into the same directory.
In the Projected Volumes documentation you can find a detailed explanation, but additionally I've created an example that might help you understand how it works.
Using a projected volume, I mounted secret.file1 and secret.file2 from the Secret and config.file1 from the ConfigMap as files into the Pod.
---
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  secret.file1: |
    c2VjcmV0RmlsZTEK
  secret.file2: |
    c2VjcmV0RmlsZTIK
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  config.file1: |
    configFile1
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: all-in-one
      mountPath: "/config-volume"
      readOnly: true
  volumes:
  - name: all-in-one
    projected:
      sources:
      - secret:
          name: my-secret
          items:
          - key: secret.file1
            path: secret-dir1/secret.file1
          - key: secret.file2
            path: secret-dir2/secret.file2
      - configMap:
          name: my-config
          items:
          - key: config.file1
            path: config-dir1/config.file1
We can check how it works:
$ kubectl exec nginx -- ls /config-volume
config-dir1
secret-dir1
secret-dir2
$ kubectl exec nginx -- cat /config-volume/config-dir1/config.file1
configFile1
$ kubectl exec nginx -- cat /config-volume/secret-dir1/secret.file1
secretFile1
$ kubectl exec nginx -- cat /config-volume/secret-dir2/secret.file2
secretFile2
If this response doesn't answer your question, please provide more details about your Secret and what exactly you want to achieve.

Get value of configMap from mountPath

I created a ConfigMap this way:
kubectl create configmap some-config --from-literal=key4=value1
After that I created a pod which looks like this.
I connect to this pod this way:
k exec -it nginx-configmap -- /bin/sh
I found the folder /some/path but I couldn't get the value of key4.
If you refer to your ConfigMap in your Pod this way:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: config-volume
  volumes:
  - name: config-volume
    configMap:
      name: some-config
it will be available in your Pod as a file /var/www/html/key4 with the content of value1.
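You can verify this from outside the Pod, for example:
kubectl exec -it mypod -- cat /var/www/html/key4
which should print value1.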
If you'd rather have it available as an environment variable, you need to refer to it this way:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    envFrom:
    - configMapRef:
        name: some-config
As you can see, you don't need any volumes or volume mounts for it.
Once you connect to such a Pod by running:
kubectl exec -ti mypod -- /bin/bash
You will see that your environment variable is defined:
root@mypod:/# echo $key4
value1

GKE cannot pull image, even though imagePullSecrets is defined

In Google Kubernetes Engine I created a POC cluster for our company, which worked flawlessly. But now, when I try to create our production environment, I cannot seem to get imagePullSecrets to work. It's the exact same credentials as in the POC, the same Helm chart, and the exact same regcred YAML file.
Yet I keep getting the classic:
Back-off pulling image "registry.company.co/frontend/company-web/upload": ImagePullBackOff
Pulling manually on the node works with the same credentials as those I supplied in imagePullSecrets.
I've tried defining imagePullSecrets both at the chart level and on the ServiceAccount.
I've verified the secret format and directly copied the credentials there when trying the manual pulls.
GKE picks up regcred and shows it in the deployment.
regcred was generated by kubectl create secret docker-registry regcred --docker-server="registry.company.co" --docker-username="gitlab" --docker-password="[PASSWORD]"
regcred secret
kind: Secret
apiVersion: v1
metadata:
  name: regcred
  namespace: default
data:
  .dockerconfigjson: eyJhdXRocyI6eyJyZWdpc3RyeS5jb21wYW55LmNvIjp7InVzZXJuYW1lIjoiZ2l0bGFiIiwicGFzc3dvcmQiOiJbUkVEQUNURURdIiwiYXV0aCI6IloybDBiR0ZpT2x0QmJITnZJRkpsWkdGamRHVmtYUT09In19fQ==
type: kubernetes.io/dockerconfigjson
Service Account
kind: ServiceAccount
apiVersion: v1
metadata:
  name: default
  namespace: default
secrets:
- name: default-token-jktj5
imagePullSecrets:
- name: regcred
Deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: gcr.io/google_containers/volume-nfs:latest
        ports:
        - name: nfs
          containerPort: 2049
        - name: mountd
          containerPort: 20048
        - name: rpcbind
          containerPort: 111
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /exports
          name: mypvc
      initContainers:
      - name: init-volume-perms
        imagePullPolicy: Always
        image: alpine
        command: ["/bin/sh", "-c"]
        args: ["mkdir /mnt/company-logos; mkdir /mnt/uploads; chown -R 1337:1337 /mnt"]
        volumeMounts:
        - mountPath: /mnt
          name: mypvc
      - name: company-web-uploads
        image: registry.company.co/frontend/company-web/uploads
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /var/lib/company/web/uploads
          subPath: uploads
          name: mypvc
      - name: company-logos
        image: registry.company.co/backend/pdf-service/company-logos
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /var/lib/company/shared/company-logos
          subPath: company-logos
          name: mypvc
      volumes:
      - name: mypvc
        gcePersistentDisk:
          pdName: gke-nfs-disk
          fsType: ext4
I've looked around, following different guides from the ground up, with no success.
So I'm at a total loss as to what to do.
Default namespace all around
It may be because of a namespace issue. Can you verify a few things:
Are you using the default namespace in both places?
Is there a K8s version difference between POC and prod?
Can you recreate the working secret with something like kubectl get secret default-token-jktj5 -o yaml > imagepullsecret.yaml? Edit the YAML file to remove the resourceVersion and other status information, then apply the same to prod.
I have seen this issue in GKE because of multiline secrets being converted to base64. Ensure the secrets match between environments.
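One way to check that the registry secret really carries the same credentials in both environments is to decode it and compare, for example:
kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
Run this against both the POC and the production cluster and diff the output; the registry URL, username, and auth fields should match.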