Mounting dot files into the home folder of a pod blows away the folder - kubernetes

I am building containers that are designed to build and publish things, so I need to configure the .pypirc and similar dot files.
I am trying to do it with a ConfigMap. Creating a ConfigMap with each of the dot files is easy enough; my problem is mapping it into the pod.
apiVersion: v1
kind: Pod
metadata:
  generateName: jnlp-
  labels:
    name: jnlp
    label: jnlp
spec:
  containers:
  - name: jnlp
    image: '(redacted)/agent/cbuild-okd311-cmake3-py36:0.0.2.7'
    tty: true
    securityContext:
      runAsUser: 1001
      allowPrivilegeEscalation: false
    volumeMounts:
    - name: dotfiles
      mountPath: "/home/jenkins"
  volumes:
  - name: dotfiles
    configMap:
      name: jenkins.user.dotfiles
Here's my ConfigMap (redacted):
apiVersion: v1
data:
  .pypirc: |-
    [distutils]
    index-servers = local
    [local]
    repository: https://(redacted)/api/pypi/pypi
    username: (redacted)
    password: (redacted)
  .p4config: |-
    P4CLIENT=perf_pan
    P4USER=(redacted)
    P4PASSWD=(redacted)
    P4PORT=p4proxy-pa.(redacted):1666
    P4HOST=(redacted).com%
kind: ConfigMap
metadata:
  name: jenkins.user.dotfiles
  namespace: jenkins
I'm pretty sure that the mount is blowing away everything else that's in the /home/jenkins folder. What I'm trying to come up with is a mount that creates a dot file for each entry in my ConfigMap.
Thanks

Your suspicion is correct: mounting a volume at /home/jenkins shadows whatever was already in that directory. What you can use to fix that is subPath: https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath
The downside is that you need a volumeMount entry for each of the dot files.
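As a minimal sketch of the relevant part of the Pod spec above, with one volumeMount per ConfigMap key so the rest of /home/jenkins stays intact:

    volumeMounts:
    - name: dotfiles
      mountPath: /home/jenkins/.pypirc
      subPath: .pypirc
    - name: dotfiles
      mountPath: /home/jenkins/.p4config
      subPath: .p4config
  volumes:
  - name: dotfiles
    configMap:
      name: jenkins.user.dotfiles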

Related

How to create a volume that mounts a file whose path is configured in a ConfigMap

I'll describe my goal and then show what I did to try to achieve it. My goal is to:
create a ConfigMap that holds the path to a properties file
create a Deployment that has a volume mounting the file from the path configured in the ConfigMap
What I have done:
ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  my_properties_file_name: "my.properties"
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-client-deployment
spec:
  selector:
    matchLabels:
      app: my-client
  replicas: 1 # tells deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: my-client
    spec:
      containers:
      - name: my-client-container
        image: {{ .Values.image.client }}
        imagePullPolicy: {{ .Values.pullPolicy.client }}
        ports:
        - containerPort: 80
        env:
        - name: MY_PROPERTIES_FILE_NAME
          valueFrom:
            configMapKeyRef:
              name: my-configmap
              key: my_properties_file_name
        volumeMounts:
        - name: config
          mountPath: "/etc/config"
          readOnly: true
      imagePullSecrets:
      - name: secret-private-registry
      volumes:
      # You set volumes at the Pod level, then mount them into containers inside that Pod
      - name: config
        configMap:
          # Provide the name of the ConfigMap you want to mount.
          name: my-configmap
          # An array of keys from the ConfigMap to create as files
          items:
          - key: "my_properties_file_name"
            path: "my.properties"
The result is a file named my.properties under /etc/config, BUT the content of that file is "my.properties" (the value indicated as the file name in the ConfigMap), and not the content of the properties file as I actually have it on my local disk.
How can I mount that file, using its path configured in a ConfigMap?
Put the content of the my.properties file directly inside the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  my_properties_file_name: |
    This is the content of the file.
    It supports multiple lines but do take care of the indentation.
Alternatively, you can use a kubectl create configmap command:
kubectl create configmap my-configmap --from-file=my_properties_file_name=./my.properties
With either method, you are actually passing a snapshot of the file's content on the local disk to Kubernetes to store. Any changes you make to the file on the local disk won't be reflected unless you re-create the ConfigMap.
The design of Kubernetes allows running kubectl commands against a cluster located on the other side of the globe, so you can't simply mount a file on your local disk to be accessible in real time by the cluster. If you want such a mechanism, you can't use a ConfigMap; instead you would need to set up a shared volume that is mounted by both your local machine and the cluster, for example using an NFS server.
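If you edit the local file and want to push a fresh snapshot, one common idiom (assuming kubectl 1.18 or newer for the --dry-run=client flag) is to regenerate the manifest and pipe it through kubectl apply:

kubectl create configmap my-configmap \
  --from-file=my_properties_file_name=./my.properties \
  --dry-run=client -o yaml | kubectl apply -f -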

Write to Secret file in pod

I define a Secret:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  config.yaml: |-
    apiUrl: "https://my.api.com/api/v1"
    username: Administrator
    password: NewPasswdTest11
And then I create a volume mount in the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-webapp-test
  labels:
    name: k8s-webapp-test
    version: 1.0.4
spec:
  replicas: 2
  selector:
    matchLabels:
      name: k8s-webapp-test
      version: 1.0.4
  template:
    metadata:
      labels:
        name: k8s-webapp-test
        version: 1.0.4
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      volumes:
      - name: secret-volume
        secret:
          secretName: string-data-secret
      containers:
      - name: k8s-webapp-test
        image: dockerstore/k8s-webapp-test:1.0.4
        ports:
        - containerPort: 80
        volumeMounts:
        - name: secret-volume
          mountPath: "/secrets"
          readOnly: false
So, after the deployment, I have 2 pods with volume mounts at C:\secrets (I use Windows nodes). When I try to edit config.yaml, located in the C:\secrets folder, I get the following error:
Access to the path 'c:\secrets\config.yaml' is denied.
Although I marked the volume as readOnly: false, I cannot write to it. How can I modify the file?
As you can see here, this is not possible by design:
Secret, configMap, downwardAPI and projected volumes will be mounted as read-only volumes. Applications that attempt to write to these volumes will receive read-only filesystem errors. Previously, applications were allowed to make changes to these volumes, but those changes were reverted at an arbitrary interval by the system. Applications should be re-configured to write derived files to another location
You can look into using an init container which mounts the secret and then copies it to the desired location, where you can modify it.
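A minimal sketch of that approach, where the init container copies the mounted Secret into a writable emptyDir volume (the volume name writable-secrets and the busybox image are illustrative assumptions, not part of the original Deployment):

initContainers:
- name: copy-secrets
  image: busybox
  # Copy the read-only Secret files into a writable location.
  command: ["sh", "-c", "cp /secrets/* /writable-secrets/"]
  volumeMounts:
  - name: secret-volume
    mountPath: /secrets
    readOnly: true
  - name: writable-secrets
    mountPath: /writable-secrets
volumes:
- name: writable-secrets
  emptyDir: {}

The main container then mounts writable-secrets instead of the Secret volume and can modify the copies freely.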
As an alternative to the init container, you might also use a container lifecycle hook, i.e. a postStart hook, which executes immediately after a container is created:
lifecycle:
  postStart:
    exec:
      command:
      - "/bin/sh"
      - "-c"
      - >
        cp -r /secrets ~/secrets;
You can create Secrets from within a pod, but it seems you need to use the Kubernetes REST API to do so:
https://kubernetes.io/docs/concepts/overview/kubernetes-api/
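As a rough sketch of what such a call could look like from inside a pod, using the service account credentials that are mounted by default (the pod's service account needs RBAC permission to create Secrets; the namespace and secret name here are illustrative assumptions):

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -X POST https://kubernetes.default.svc/api/v1/namespaces/default/secrets \
  -d '{"apiVersion":"v1","kind":"Secret","metadata":{"name":"my-new-secret"},"stringData":{"key":"value"}}'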

How to mount data file in kubernetes via pvc?

I want to persist a data file via a PVC with GlusterFS in Kubernetes. Mounting the directory works, but when I try to mount the file it fails, because the file gets mounted as the directory type. How can I mount the data file in k8s?
how can I mount the data file in k8s?
This is often application-specific and there are several ways to do so, but mainly you want to read about subPath.
Generally, you can choose to:
Use subPath to separate config files.
Mount the volume/path as a directory at some other location and then link the file to the specific place within the pod (for rare cases where mixing with other config files or directory permissions in the same dir presents an issue, or where the application's boot/start policy prevents files from being mounted at pod start but requires them to be present after some initialization is performed; really edge cases).
Use ConfigMaps (or even Secrets) to hold configuration files. Note that when using subPath with a ConfigMap or Secret the pod won't pick up updates to them automatically, but this is the more common way of handling configuration files, and your conf/interpreter.json looks like a fine example...
Notes to keep in mind:
Mounting "overlaps" the underlying path, so you have to mount all the way down to the file itself in order to share its folder with other files. Mounting only up to the folder would give you a folder with a single file in it, which is usually not what is required.
If you use ConfigMaps then you have to reference the individual file with subPath in order to mount it, even if the ConfigMap holds only a single file. Something like this:
containers:
- volumeMounts:
  - name: my-config
    mountPath: /my-app/my-config.json
    subPath: config.json
volumes:
- name: my-config
  configMap:
    name: cm-my-config-map-example
Edit:
Here is a full example of mounting a single example.sh script file into the /bin directory of a container using a ConfigMap.
You can adjust this example to place any file, with any permissions, in any desired folder. Replace my-namespace with any desired namespace (or remove it completely to use the default one).
Config map:
kind: ConfigMap
apiVersion: v1
metadata:
  namespace: my-namespace
  name: cm-example-script
data:
  example-script.sh: |
    #!/bin/bash
    echo "Yaaaay! It's an example!"
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: my-namespace
  name: example-deployment
  labels:
    app: example-app
spec:
  selector:
    matchLabels:
      app: example-app
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - image: ubuntu:16.04
        name: example-app-container
        stdin: true
        tty: true
        volumeMounts:
        - mountPath: /bin/example-script.sh
          subPath: example-script.sh
          name: example-script
      volumes:
      - name: example-script
        configMap:
          name: cm-example-script
          defaultMode: 0744
Here is a full example of mounting a single test.txt file into the /bin directory of a container using a persistent volume instead of a ConfigMap (the file already exists in the root of the volume).
It mounts in much the same way (test.txt is mounted as /bin/test.txt). Note two things: test.txt must already exist on the PV, and I'm using a StatefulSet only to get an automatically provisioned PVC; adjust accordingly...
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: my-namespace
  name: ss-example-file-mount
spec:
  serviceName: svc-example-file-mount
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - image: ubuntu:16.04
        name: example-app-container
        stdin: true
        tty: true
        volumeMounts:
        - name: persistent-storage-example
          mountPath: /bin/test.txt
          subPath: test.txt
  volumeClaimTemplates:
  - metadata:
      name: persistent-storage-example
    spec:
      storageClassName: sc-my-storage-class-for-provisioning-pv
      accessModes: [ ReadWriteOnce ]
      resources:
        requests:
          storage: 2Gi

Use two different ConfigMaps in the same pod as volume mounts

I am trying to write a deployment for a k8s pod.
I have the following in the deploy.yaml file:
apiVersion: apps/v1beta1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  name: __DEPLOY_NAME__-__ENV__
  namespace: __RG_NAME__
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: __DEPLOY_NAME__-__ENV__
    spec:
      containers:
      - name: __DEPLOY_NAME__-__ENV__
        image: __CONTAINER_REGISTRY__/__IMAGE_NAME__
        env:
        - name: NODE_ENV
          value: __ENV__
        imagePullPolicy: Always
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config
        ports:
        - containerPort: __PORT__
        volumes:
        - name: config-volume
          configMap:
            name: config
          configMap:
            name: oauth
I am trying to use two different ConfigMaps, named 'config' and 'oauth', as volume mounts in the same pod. When I tried the above code I got the following error:
error validating data: found invalid field volumes for v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
I am not sure whether what I want to achieve is feasible, and if not, how I should specify the volume mounts.
First: fix the indentation of your volumes block; it should be two spaces less (not a child of containers: but a sibling of it).
Second: create two distinct volumes with distinct names, and then add a volume mount for each of them.
Third: if you need to merge files from them, you might want to try mounting particular files with subPath. A corrected sketch of the first two fixes is shown below.
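Here is a corrected sketch of the relevant fragment, with volumes moved up to the pod spec level and one entry per ConfigMap (the /etc/oauth mount path is an illustrative assumption):

    spec:
      containers:
      - name: __DEPLOY_NAME__-__ENV__
        image: __CONTAINER_REGISTRY__/__IMAGE_NAME__
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config
        - name: oauth-volume
          mountPath: /etc/oauth
      volumes:
      - name: config-volume
        configMap:
          name: config
      - name: oauth-volume
        configMap:
          name: oauth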

Running a shell script to initialize pods in kubernetes (initializing my db cluster)

I need to be able to run a shell script (my script initializes my db cluster) to initialize my pods in Kubernetes.
I don't want to create my script inside my Dockerfile, because I get my image directly from the web and I don't want to touch it.
So I want to know if there is a way to get my script into one of my volumes so I can execute it like this:
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["./init.sh"]
  restartPolicy: OnFailure
It depends on what exactly your init script does, but init containers should be helpful in such cases. Init containers are run before the main application container is started and can do preparation work such as creating configuration files.
You would still need your own Docker image, but it doesn't have to be the same image as the database one.
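A minimal sketch of that pattern, shipping the script in a ConfigMap so the image from the web stays untouched (the ConfigMap name init-script-configmap and the sleep command are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: command-demo
spec:
  initContainers:
  - name: init-db
    image: debian
    # Run the script that was mounted from the ConfigMap.
    command: ["/bin/sh", "/scripts/init.sh"]
    volumeMounts:
    - name: init-script
      mountPath: /scripts
  containers:
  - name: command-demo-container
    image: debian
    command: ["sleep", "infinity"]
  volumes:
  - name: init-script
    configMap:
      name: init-script-configmap
  restartPolicy: OnFailure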
I finally decided to take the approach of creating a ConfigMap that holds the script we want to run, and then referencing that ConfigMap from a volume.
Here is a short explanation:
In my pod.yaml file there is a volumeMount at "/pgconf", which is the directory from which the Docker image reads any SQL script you put there and runs it when the pod is starting.
Inside volumes I put the ConfigMap name (postgres-init-script-configmap), which is the name defined inside the configmap.yaml file.
There is no need to create the ConfigMap manually with kubectl; the pod will pick up the configuration from the ConfigMap file as long as you place it in the same directory as the pod.yaml.
My pod YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: "{{.Values.container.name.primary}}"
  labels:
    name: "{{.Values.container.name.primary}}"
spec:
  securityContext:
    fsGroup: 26
  restartPolicy: {{default "Always" .Values.restartPolicy}}
  containers:
  - name: {{.Values.container.name.primary}}
    image: "{{.Values.image.repository}}/{{.Values.image.container}}:{{.Values.image.tag}}"
    ports:
    - containerPort: {{.Values.container.port}}
    env:
    - name: PGHOST
      value: /tmp
    - name: PG_PRIMARY_USER
      value: primaryuser
    - name: PG_MODE
      value: primary
    resources:
      requests:
        cpu: {{ .Values.resources.cpu }}
        memory: {{ .Values.resources.memory }}
    volumeMounts:
    - mountPath: /pgconf
      name: init-script
      readOnly: true
  volumes:
  - name: init-script
    configMap:
      name: postgres-init-script-configmap
My configmap.yaml (which contains the SQL script that will initialize the DB):
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-init-script-configmap
data:
  setup.sql: |-
    CREATE USER david WITH PASSWORD 'david';