Mount / copy a file from host to Pod in kubernetes using minikube - kubernetes

I'm writing a kubectl configuration to start an image and copy a file to the container.
I need the file Config.yaml in /, so /Config.yaml needs to be a valid file.
I need that file in the Pod before it starts, so kubectl cp does not work.
I have the Config2.yaml in my local folder, and I'm starting the pod like:
kubectl apply -f pod.yml
Here follows my pod.yml file.
apiVersion: v1
kind: Pod
metadata:
  name: python
spec:
  containers:
  - name: python
    image: mypython
    volumeMounts:
    - name: config
      mountPath: /Config.yaml
  volumes:
  - name: config
    hostPath:
      path: Config2.yaml
      type: File
If I try to use it like this, it also fails:
- name: config-yaml
  mountPath: /
  subPath: Config.yaml
  #readOnly: true

If you just need the information contained in the config.yaml to be present in the pod from the time it is created, use a configMap instead.
Create a configMap that contains all the data stored in the config.yaml and mount that into the correct path in the pod. This would not work for read/write, but works wonderfully for read-only data
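A minimal sketch of that approach, reusing the names from the question's pod.yml; the ConfigMap name python-config is a hypothetical choice:
kubectl create configmap python-config --from-file=Config.yaml=Config2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: python
spec:
  containers:
  - name: python
    image: mypython
    volumeMounts:
    - name: config
      # subPath mounts only this one file at /Config.yaml, leaving the rest of / untouched
      mountPath: /Config.yaml
      subPath: Config.yaml
      readOnly: true
  volumes:
  - name: config
    configMap:
      name: python-config
Note that subPath mounts are not refreshed if the ConfigMap is updated later, which is usually fine for read-only startup configuration.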

You can try a postStart lifecycle handler here to validate the file; if the handler fails, the container is killed and the Pod won't run.
Please refer to the Kubernetes documentation on container lifecycle hooks.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    resources: {}
    volumeMounts:
    - mountPath: /config.yaml
      name: config
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "apt update && apt install yamllint -y && yamllint /config.yaml"]
  volumes:
  - name: config
    hostPath:
      path: /tmp/config.yaml
      type: File
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
If config.yaml is invalid, the postStart hook fails and the Pod won't start.
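If that happens, the reason should be visible in the pod events, for example:
kubectl describe pod nginx   # look for a FailedPostStartHook event containing the yamllint output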

Related

deploying wazuh-manager and replace ossec.conf after pods running - kubernetes

I'm deploying wazuh-manager on my kubernetes cluster and I need to disable some security check features in ossec.conf. I'm trying to replace the ossec.conf from the wazuh-manager image with my own config-map ossec.conf, but if I create the volume mount at /var/ossec/etc/ossec.conf it deletes everything in /var/ossec/etc/ (when the wazuh-manager pod is deployed it copies all the files the manager needs into that directory).
So I'm thinking of creating a new volume mount at /wazuh/ossec.conf plus a lifecycle postStart exec command "cp /wazuh/ossec.conf > /var/ossec/etc/", but I'm getting an error that it "cannot find /var/ossec/etc/".
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: wazuh-manager
  labels:
    node-type: master
spec:
  replicas: 1
  selector:
    matchLabels:
      appComponent: wazuh-manager
      node-type: master
  serviceName: wazuh
  template:
    metadata:
      labels:
        appComponent: wazuh-manager
        node-type: master
      name: wazuh-manager
    spec:
      volumes:
      - name: ossec-conf
        configMap:
          name: ossec-config
      containers:
      - name: wazuh-manager
        image: wazuh-manager4.8
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "cp /wazuh/ossec.conf >/var/ossec/etc/ossec.conf"]
        resources:
        securityContext:
          capabilities:
            add: ["SYS_CHROOT"]
        volumeMounts:
        - name: ossec-conf
          mountPath: /wazuh/ossec.conf
          subPath: master.conf
          readOnly: true
        ports:
        - containerPort: 8855
          name: registration
  volumeClaimTemplates:
  - metadata:
      name: wazuh-disk
    spec:
      accessModes: ReadWriteOnce
      storageClassName: wazuh-csi-disk
      resources:
        requests:
          storage: 50
error:
$ kubectl get pods -n wazuh
wazuh-1670333556-0 0/1 PostStartHookError: command '/bin/sh -c cp /wazuh/ossec.conf > /var/ossec/etc/ossec.conf' exited with 1: /bin/sh: /var/ossec/etc/ossec.conf: No such file or directory...
Within the wazuh-kubernetes repository you have a file for each of the Wazuh manager cluster nodes:
wazuh/wazuh_managers/wazuh_conf/master.conf for the Wazuh Manager master node.
wazuh/wazuh_managers/wazuh_conf/worker.conf for the Wazuh Manager worker node.
With these files, ConfigMaps are created in the kustomization.yml file:
configMapGenerator:
- name: indexer-conf
  files:
  - indexer_stack/wazuh-indexer/indexer_conf/opensearch.yml
  - indexer_stack/wazuh-indexer/indexer_conf/internal_users.yml
- name: wazuh-conf
  files:
  - wazuh_managers/wazuh_conf/master.conf
  - wazuh_managers/wazuh_conf/worker.conf
- name: dashboard-conf
  files:
  - indexer_stack/wazuh-dashboard/dashboard_conf/opensearch_dashboards.yml
Then, in the deployment manifest, they are mounted to persist the configurations in the ossec.conf file of each cluster node:
wazuh/wazuh_managers/wazuh-master-sts.yaml:
...
spec:
  volumes:
  - name: config
    configMap:
      name: wazuh-conf
...
  volumeMounts:
  - name: config
    mountPath: /wazuh-config-mount/etc/ossec.conf
    subPath: master.conf
...
It should be noted that configuration files that need to end up under the /var/ossec/ directory must be mounted under the /wazuh-config-mount/ directory; the Wazuh Manager image entrypoint then takes care of copying them to their final location when the container starts. As an example, the configmap is mounted to /wazuh-config-mount/etc/ossec.conf and then copied to /var/ossec/etc/ossec.conf at startup.
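Applied to the StatefulSet in the question, a sketch of the changed mount would be (reusing the ossec-conf ConfigMap and master.conf key; the postStart cp hook is then not needed, since the image entrypoint does the copy):
        volumeMounts:
        - name: ossec-conf
          mountPath: /wazuh-config-mount/etc/ossec.conf
          subPath: master.conf
          readOnly: true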

How can I remove an element in Deployment volumeMounts with the kubectl patch command?

I have a Deployment like this:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    volumeMounts:
    - mountPath: /home
      name: john-webos-vol
      subPath: home
    - mountPath: /pkg
      name: john-vol
      readOnly: true
      subPath: school
I want to change the Deployment with the kubectl patch command, so it has the following volumeMounts in the PodTemplate instead:
target.yaml:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    volumeMounts:
    - mountPath: /home
      name: john-webos-vol
      subPath: home
I used the below command, but it didn't work.
kubectl patch deployment sample --patch "$(cat target.yaml)"
Can anyone give me some advice?
You can't do this with kubectl patch like that. The patch you used is called a strategic merge patch, and by default it merges lists rather than replacing them: you can add entries with it, but you can't remove them.
For example, if you initially have one container in your pod spec and need to add another one, you can use this kind of patch to add it. But if you have two containers and need to remove one, this kind of patch can't do it.
If you want to do it with a strategic merge patch, you need to use the retainKeys strategy (see the "Update API Objects in Place Using kubectl patch" page in the Kubernetes docs).
Let me explain another simple way to do this. Let's assume you have applied the test.yaml below with:
kubectl apply -f test.yaml
test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /home
          name: john-webos-vol
          subPath: home
        - mountPath: /pkg
          name: john-vol
          readOnly: true
          subPath: school
      volumes:
      - name: john-webos-vol
        emptyDir: {}
      - name: john-vol
        emptyDir: {}
Now you need to update it, and the updated target.yaml removes one of the volumes.
target.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /pkg
          name: john-vol
          readOnly: true
          subPath: school
      volumes:
      - name: john-vol
        emptyDir: {}
You can then just use:
kubectl apply -f target.yaml
This will update your Deployment with the new configuration.
You can use a JSON patch: http://jsonpatch.com/
Remove specific volume mount
kubectl patch deployment <NAME> --type json -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/volumeMounts/0"}]'
Replace volume mounts with what you need
kubectl patch deployment <NAME> --type json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/volumeMounts", "value": [{"mountPath": "/home", "name": "john-webos-vol", "subPath": "home"}]}]'
See the kubectl cheat sheet for more info: https://kubernetes.io/docs/reference/kubectl/cheatsheet/#patching-resources
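If no other container uses the removed mount's volume, you will usually want to drop the matching entry under /spec/template/spec/volumes as well. A sketch, assuming the volume to remove is the first entry in the list (as john-webos-vol is in the test.yaml above):
kubectl patch deployment <NAME> --type json -p='[{"op": "remove", "path": "/spec/template/spec/volumes/0"}]'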
You can leverage the apply command: get the deployment definition in JSON format, modify it (in your case removing this section):
- mountPath: /pkg
  name: john-vol
  readOnly: true
  subPath: school
with sed or a similar utility, and then apply it back:
kubectl get deployment <myDeployment> -n <myNamespace> -o json | sed -z -s -E -b -e 's/REGEX_TO_MATCH_PART_OF_DEPLOYMENT_TO_REMOVE//g' | kubectl apply -f -
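If you would rather not build a regex, a similar pipeline can be written with jq instead of sed (a sketch, assuming a reasonably recent jq is installed and the mount to drop is the /pkg one from the question):
kubectl get deployment <myDeployment> -n <myNamespace> -o json \
  | jq 'del(.spec.template.spec.containers[0].volumeMounts[] | select(.mountPath == "/pkg"))' \
  | kubectl apply -f -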

k8s initContainer mountPath does not exist after kubectl pod deployment

Below is my deployment yaml. After deployment I can access the pod and I can see the mountPath "/usr/share/nginx/html", but I cannot find "/work-dir", which should have been created by the initContainer.
Could someone explain the reason?
Thanks and Rgds
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://kubernetes.io
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
The volume at "/work-dir" is mounted by the init container and the "/work-dir" location only exists in the init container. When the init container completes, its file system is gone so the "/work-dir" directory in that init container is "gone". The application (nginx) container mounts the same volume, too, (albeit at a different location) providing mechanism for the two containers to share its content.
Per the docs:
Init containers can run with a different view of the filesystem than
app containers in the same Pod.
The volume mount with a PVC allows you to share the contents of /work-dir/ and /usr/share/nginx/html/, but it does not mean the nginx container will have a /work-dir folder. Given this, you may think you could just mount the path /, which would allow you to share all folders underneath it. However, a mountPath of / does not work.
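You can see this from the running pod, for example:
kubectl exec init-demo -c nginx -- ls /usr/share/nginx/html   # shows index.html written by the init container
kubectl exec init-demo -c nginx -- ls /work-dir               # fails: that path only existed in the init container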
So, how do you solve your problem? You could have another pod mount /work-dir/ in case you actually need the folder. Here is an example (a PVC and a Deployment with the mounts):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-fs-pvc
  namespace: default
  labels:
    mojix.service: default-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: shared-fs
  labels:
    mojix.service: shared-fs
spec:
  replicas: 1
  selector:
    matchLabels:
      mojix.service: shared-fs
  template:
    metadata:
      creationTimestamp: null
      labels:
        mojix.service: shared-fs
    spec:
      terminationGracePeriodSeconds: 3
      containers:
      - name: nginx-c
        image: nginx:latest
        volumeMounts:
        - name: shared-fs-volume
          mountPath: /var/www/static/
      - name: alpine-c
        image: alpine:latest
        command: ["/bin/sleep", "10000s"]
        lifecycle:
          postStart:
            exec:
              command: ["/bin/mkdir", "-p", "/work-dir"]
        volumeMounts:
        - name: shared-fs-volume
          mountPath: /work-dir/
      volumes:
      - name: shared-fs-volume
        persistentVolumeClaim:
          claimName: shared-fs-pvc
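Once that Deployment is running, both containers see the same volume contents; with a reasonably recent kubectl you can check from the alpine container directly through the Deployment:
kubectl exec deploy/shared-fs -c alpine-c -- ls /work-dir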

How to allow a Kubernetes Job access to a file on host

I've been through the Kubernetes documentation thoroughly but am still having problems interacting with a file on the host filesystem from an application running inside a pod launched by a Kubernetes Job. This happens with even the simplest utility, so I have included a stripped-down example of my yaml config. The local file, 'hello.txt', referenced here does exist in /tmp on the host (i.e. outside the Kubernetes environment) and I have even chmod 777'd it. I've also tried places in the host's filesystem other than /tmp.
The pod that is launched by the Kubernetes Job terminates with Status=Error and generates the log ls: /testing/hello.txt: No such file or directory
Because I ultimately want to use this programmatically as part of a much more sophisticated workflow, it really needs to be a Job, not a Deployment. I hope that is possible. My current config file, which I am launching with kubectl just for testing, is:
apiVersion: batch/v1
kind: Job
metadata:
  name: kio
  namespace: kmlflow
spec:
  # ttlSecondsAfterFinished: 5
  template:
    spec:
      containers:
      - name: kio-ingester
        image: busybox
        volumeMounts:
        - name: test-volume
          mountPath: /testing
        imagePullPolicy: IfNotPresent
        command: ["ls"]
        args: ["-l", "/testing/hello.txt"]
      volumes:
      - name: test-volume
        hostPath:
          # directory location on host
          path: /tmp
          # this field is optional
          # type: Directory
      restartPolicy: Never
  backoffLimit: 4
Thanks in advance for any assistance.
Looks like when the volume is mounted, the existing data can't be accessed.
You will need to make use of an init container to pre-populate the data in the volume.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: config-data
    image: busybox
    # run through a shell so the ">" redirection actually writes the file
    command: ["/bin/sh", "-c", "echo -n \"{'address':'10.0.1.192:2379/db'}\" > /data/config"]
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    hostPath: {}
Reference:
https://medium.com/@jmarhee/using-initcontainers-to-pre-populate-volume-data-in-kubernetes-99f628cd4519

Can I use a configmap created from an init container in the pod?

I am trying to "pass" a value from the init container to a container. Since values in a configmap are shared across the namespace, I figured I can use it for this purpose. Here is my job.yaml (with faked-out info):
apiVersion: batch/v1
kind: Job
metadata:
  name: installer-test
spec:
  template:
    spec:
      containers:
      - name: installer-test
        image: installer-test:latest
        env:
        - name: clusterId
          value: "some_cluster_id"
        - name: in_artifactoryUrl
          valueFrom:
            configMapKeyRef:
              name: test-config
              key: artifactorySnapshotUrl
      initContainers:
      - name: artifactory-snapshot
        image: busybox
        command: ['kubectl', 'create configmap test-config --from-literal=artifactorySnapshotUrl=http://artifactory.com/some/url']
      restartPolicy: Never
  backoffLimit: 0
This does not seem to work (EDIT: although the statements following this edit note may still be correct, this is not working because kubectl is not a recognizable command in the busybox image), and I am assuming that the pod can only read values from a configmap created BEFORE the pod is created. Has anyone else come across the difficulty of passing values between containers, and what did you do to solve this?
Should I deploy the configmap in another pod and wait to deploy this one until the configmap exists?
(I know I can write files to a volume, but I'd rather not go that route unless it's absolutely necessary, since it essentially means our docker images must be coupled to an environment where some specific files exist)
You can create an emptyDir volume and mount it in both containers. Unlike a persistent volume, emptyDir has no portability issues.
apiVersion: batch/v1
kind: Job
metadata:
  name: installer-test
spec:
  template:
    spec:
      containers:
      - name: installer-test
        image: installer-test:latest
        env:
        - name: clusterId
          value: "some_cluster_id"
        volumeMounts:
        - name: tmp
          mountPath: /tmp/artifact
      initContainers:
      - name: artifactory-snapshot
        image: busybox
        command: ['/bin/sh', '-c', 'cp x /tmp/artifact/x']
        volumeMounts:
        - name: tmp
          mountPath: /tmp/artifact
      restartPolicy: Never
      volumes:
      - name: tmp
        emptyDir: {}
  backoffLimit: 0
If for various reasons you don't want to use a shared volume, and you want to create a ConfigMap or a Secret, here is a solution.
First, you need to use a Docker image that contains kubectl, for example gcr.io/cloud-builders/kubectl:latest (a Docker image containing kubectl, maintained by Google).
Then this (init) container needs enough rights to create resources on the Kubernetes cluster. By default, Kubernetes injects a token of the default service account (named "default") into the container, but I prefer to make it explicit, so add this to the pod spec (automountServiceAccountToken is a pod-level field):
...
spec:
  # Already true by default, but if you use it, prefer to make it explicit
  automountServiceAccountToken: true
  initContainers:
  - name: artifactory-snapshot
And add "edit" role to "default" service account:
kubectl create rolebinding default-edit-rb --clusterrole=edit --serviceaccount=default:myapp --namespace=default
Then the complete example:
apiVersion: batch/v1
kind: Job
metadata:
  name: installer-test
spec:
  template:
    spec:
      # Already true by default, but prefer to make it explicit (pod-level field)
      automountServiceAccountToken: true
      initContainers:
      - name: artifactory-snapshot
        # You need to use a docker image which contains kubectl
        image: gcr.io/cloud-builders/kubectl:latest
        command:
        - sh
        - -c
        # the "--dry-run -o yaml | kubectl apply -f -" is to make the command idempotent
        - kubectl create configmap test-config --from-literal=artifactorySnapshotUrl=http://artifactory.com/some/url --dry-run -o yaml | kubectl apply -f -
      containers:
      - name: installer-test
        image: installer-test:latest
        env:
        - name: clusterId
          value: "some_cluster_id"
        - name: in_artifactoryUrl
          valueFrom:
            configMapKeyRef:
              name: test-config
              key: artifactorySnapshotUrl
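After the Job has run, you can check that the init container actually created the ConfigMap before the main container started:
kubectl get configmap test-config -o yaml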
First of all, kubectl is a binary. It was downloaded onto your machine before you could use the command, but inside your pod the kubectl binary doesn't exist. So you can't use the kubectl command from a busybox image.
Furthermore, kubectl uses credentials saved on your machine (probably under ~/.kube). So if you try to use kubectl from inside an image, it will fail because of missing credentials.
For your scenario, I suggest the same as @ccshih: use volume sharing.
Here is the official doc about volume sharing between an init container and a container.
The yaml used there is:
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://kubernetes.io
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
Here the init container saves a file in the volume, and the file is later available inside the app container. Try the tutorial yourself for a better understanding.