I have a single node Kubernetes cluster. I want the pod I make to have access to /mnt/galahad on my local computer (which is the host for the cluster).
Here is my Kubernetes config yaml:
apiVersion: v1
kind: Pod
metadata:
  name: galahad-test-distributor
  namespace: galahad-test
spec:
  volumes:
    - name: place-for-stuff
      hostPath:
        path: /mnt/galahad
  containers:
    - name: galahad-test-distributor
      image: vergilkilla/distributor:v9
      volumeMounts:
        - name: place-for-stuff
          mountPath: /mnt
      resources:
        limits:
          memory: "200Mi"
        requests:
          memory: "100Mi"
I start my pod like so:
kubectl apply -f ./create-distributor.yaml -n galahad-test
I get a terminal into my newly-made pod:
kubectl exec -it galahad-test-distributor -n galahad-test -- /bin/bash
I go to /mnt in my pod and it doesn't have anything from /mnt/galahad. I make a new file in the host's /mnt/galahad folder, but it doesn't show up in the pod. How do I get the host path's files to be reflected in the pod? Is this possible in the fairly straightforward way I am trying here (defining it per pod definition, without creating separate PersistentVolumes and PersistentVolumeClaims)?
Your yaml file looks good.
Using this configuration:
apiVersion: v1
kind: Pod
metadata:
  name: galahad-test-distributor
  namespace: galahad-test
spec:
  volumes:
    - name: place-for-stuff
      hostPath:
        path: /mnt/galahad
  containers:
    - name: galahad-test-distributor
      image: busybox
      args: [/bin/sh, -c,
             'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
      volumeMounts:
        - name: place-for-stuff
          mountPath: /mnt
      resources:
        limits:
          memory: "200Mi"
        requests:
          memory: "100Mi"
I ran this and everything worked as expected:
>>> kubectl apply -f create-distributor.yaml # side note: you don't need
                                             # to specify the namespace here
                                             # since it's inside the yaml file
pod/galahad-test-distributor created
>>> touch /mnt/galahad/file
>>> kubectl -n galahad-test exec galahad-test-distributor ls /mnt
file
Are you sure you are adding your files in the right place? For instance, if you are running your cluster inside a VM (e.g. minikube), make sure you are adding the files inside the VM, not on the machine hosting the VM.
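For instance, with minikube the file has to be created from inside the minikube VM, or a host directory has to be mounted into the VM first. A minimal sketch of both options (the /path/on/your/machine directory is just a placeholder):

# open a shell inside the minikube VM and create the file there
minikube ssh
sudo mkdir -p /mnt/galahad
sudo touch /mnt/galahad/file

# alternatively, expose a directory from the machine hosting the VM into the VM
minikube mount /path/on/your/machine:/mnt/galahad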
I'm deploying wazuh-manager on my Kubernetes cluster and I need to disable some security check features in ossec.conf. I'm trying to replace the ossec.conf shipped in the wazuh-manager image with the one from my config-map, but if I create the volume mount at /var/ossec/etc/ossec.conf it wipes out everything under /var/ossec/etc/ (when the wazuh-manager pod is deployed, it copies in all the files the manager needs).
So I'm thinking of mounting the config-map at a new path, /wazuh/ossec.conf, and using a lifecycle postStart exec command ("cp /wazuh/ossec.conf > /var/ossec/etc/") to copy it into place, but I'm getting an error that /var/ossec/etc/ cannot be found.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: wazuh-manager
  labels:
    node-type: master
spec:
  replicas: 1
  selector:
    matchLabels:
      appComponent: wazuh-manager
      node-type: master
  serviceName: wazuh
  template:
    metadata:
      labels:
        appComponent: wazuh-manager
        node-type: master
      name: wazuh-manager
    spec:
      volumes:
        - name: ossec-conf
          configMap:
            name: ossec-config
      containers:
        - name: wazuh-manager
          image: wazuh-manager4.8
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "cp /wazuh/ossec.conf >/var/ossec/etc/ossec.conf"]
          resources:
          securityContext:
            capabilities:
              add: ["SYS_CHROOT"]
          volumeMounts:
            - name: ossec-conf
              mountPath: /wazuh/ossec.conf
              subPath: master.conf
              readOnly: true
          ports:
            - containerPort: 8855
              name: registration
  volumeClaimTemplates:
    - metadata:
        name: wazuh-disk
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: wazuh-csi-disk
        resources:
          requests:
            storage: 50
error:
$ kubectl get pods -n wazuh
wazuh-1670333556-0 0/1 PostStartHookError: command '/bin/sh -c cp /wazuh/ossec.conf > /var/ossec/etc/ossec.conf' exited with 1: /bin/sh: /var/ossec/etc/ossec.conf: No such file or directory...
Within the wazuh-kubernetes repository you have a file for each of the Wazuh manager cluster nodes:
wazuh/wazuh_managers/wazuh_conf/master.conf for the Wazuh Manager master node.
wazuh/wazuh_managers/wazuh_conf/worker.conf for the Wazuh Manager worker node.
With these files, configmaps are created in the kustomization.yml script:
configMapGenerator:
  - name: indexer-conf
    files:
      - indexer_stack/wazuh-indexer/indexer_conf/opensearch.yml
      - indexer_stack/wazuh-indexer/indexer_conf/internal_users.yml
  - name: wazuh-conf
    files:
      - wazuh_managers/wazuh_conf/master.conf
      - wazuh_managers/wazuh_conf/worker.conf
  - name: dashboard-conf
    files:
      - indexer_stack/wazuh-dashboard/dashboard_conf/opensearch_dashboards.yml
Then, in the deployment manifest, they are mounted to persist the configurations in the ossec.conf file of each cluster node:
wazuh/wazuh_managers/wazuh-master-sts.yaml:
...
spec:
  volumes:
    - name: config
      configMap:
        name: wazuh-conf
...
      volumeMounts:
        - name: config
          mountPath: /wazuh-config-mount/etc/ossec.conf
          subPath: master.conf
...
It should be noted that the configuration files you need to end up under /var/ossec/ must be mounted under the /wazuh-config-mount/ directory; the Wazuh manager image's entrypoint then takes care of copying them into place when the container starts. For example, the configmap is mounted at /wazuh-config-mount/etc/ossec.conf and copied to /var/ossec/etc/ossec.conf at startup.
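Applied to the StatefulSet from the question, that would mean dropping the postStart hook and pointing the existing ossec-conf mount at the /wazuh-config-mount/ path instead. A sketch, reusing the ossec-config configmap and master.conf subPath from the question:

          volumeMounts:
            - name: ossec-conf
              mountPath: /wazuh-config-mount/etc/ossec.conf
              subPath: master.conf
              readOnly: true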
I am getting an error while adding an NFS volume to my Kubernetes cluster. I was able to mount the NFS share, but I am not able to create a file or directory in the mount location.
This is my yaml file
kind: Pod
apiVersion: v1
metadata:
  name: pod-using-nfs
spec:
  volumes:
    - name: nfs-volume
      nfs:
        server: 10.01.26.81
        path: /nfs_data/nfs_share_home_test/testuser
  containers:
    - name: app
      image: alpine
      volumeMounts:
        - name: nfs-volume
          mountPath: /home/kube/testuser
Then I ran the following commands to build the pod:
kubectl apply -f session.yaml
kubectl exec -it pod-using-nfs sh
After I exec into the pod:
/ # cd home/kube/testuser/
/home/kube/testuser# touch hello
touch: hello: Read-only file system
Expected output is
/ # cd home/kube/testuser/
/home/kube/testuser# touch hello
/home/kube/testuser# ls
hello
Do I need to add a securityContext to the yaml to fix this?
Any help would be appreciated!
I'm writing a Kubernetes pod configuration to start an image and copy a file into the container.
I need the file Config.yaml at /, so /Config.yaml needs to be a valid file.
I need that file in the Pod before it starts, so kubectl cp does not work.
I have Config2.yaml in my local folder, and I'm starting the pod like this:
kubectl apply -f pod.yml
Here follows my pod.yml file.
apiVersion: v1
kind: Pod
metadata:
  name: python
spec:
  containers:
    - name: python
      image: mypython
      volumeMounts:
        - name: config
          mountPath: /Config.yaml
  volumes:
    - name: config
      hostPath:
        path: Config2.yaml
        type: File
If I try it like this, it also fails:
        - name: config-yaml
          mountPath: /
          subPath: Config.yaml
          # readOnly: true
If you just need the information contained in the Config.yaml to be present in the pod from the time it is created, use a configMap instead.
Create a configMap that contains all the data stored in the Config.yaml and mount it at the correct path in the pod. This would not work for read/write, but works wonderfully for read-only data.
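A minimal sketch of that approach, assuming the names from the question (the configMap name python-config is made up). First create the configMap from the local Config2.yaml, storing it under the key Config.yaml:

kubectl create configmap python-config --from-file=Config.yaml=Config2.yaml

Then mount that key as a single file at /Config.yaml using subPath, so the rest of / is left untouched:

apiVersion: v1
kind: Pod
metadata:
  name: python
spec:
  containers:
    - name: python
      image: mypython
      volumeMounts:
        - name: config
          mountPath: /Config.yaml   # mount just this one file, not a whole directory
          subPath: Config.yaml
  volumes:
    - name: config
      configMap:
        name: python-config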
You can try a postStart lifecycle handler to validate the file when the pod starts.
See the Kubernetes documentation on container lifecycle hooks for details.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
    - image: nginx
      name: nginx
      resources: {}
      volumeMounts:
        - mountPath: /config.yaml
          name: config
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "apt update && apt install yamllint -y && yamllint /config.yaml"]
  volumes:
    - name: config
      hostPath:
        path: /tmp/config.yaml
        type: File
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
If config.yaml is invalid, the pod won't start.
From this article, I can specify 'userspace' as my proxy-mode, but I am unable to understand what command I need to use for it and at what stage. Like after creating the deployment or the service?
I am currently running a minikube cluster.
kube-proxy is a process that runs on each Kubernetes node to manage network connections coming into and out of the cluster.
You don't run the command as such; your deployment method (usually kubeadm) configures the options for it to run.
As @Hang Du mentioned, in minikube you can modify its options by editing the kube-proxy configmap and changing mode to userspace:
kubectl -n kube-system edit configmap kube-proxy
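Inside that configmap, the proxy settings live in the config.conf key; the field to change is mode. A sketch showing only the relevant part of the edit:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-proxy
  namespace: kube-system
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    ...
    mode: "userspace"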
Then delete the Pod.
kubectl -n kube-system get pod
kubectl -n kube-system delete pod kube-proxy-XXXXX
If you are using minikube, you can find a DaemonSet named kube-proxy like the following:
$ kubectl get ds -n kube-system kube-proxy -o yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  ...
  labels:
    k8s-app: kube-proxy
  name: kube-proxy
  namespace: kube-system
  ...
spec:
  ...
    spec:
      containers:
      - command:
        - /usr/local/bin/kube-proxy
        - --config=/var/lib/kube-proxy/config.conf
        - --hostname-override=$(NODE_NAME)
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        image: k8s.gcr.io/kube-proxy:v1.15.0
        imagePullPolicy: IfNotPresent
        name: kube-proxy
        ...
        volumeMounts:
        - mountPath: /var/lib/kube-proxy
          name: kube-proxy
        - mountPath: /run/xtables.lock
          name: xtables-lock
        - mountPath: /lib/modules
          name: lib-modules
          readOnly: true
      dnsPolicy: ClusterFirst
      ...
      volumes:
      - configMap:
          defaultMode: 420
          name: kube-proxy
        name: kube-proxy
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock
      - hostPath:
          path: /lib/modules
          type: ""
        name: lib-modules
  ...
Look at .spec.template.spec.containers[].command: the container runs the kube-proxy command. You can provide the flag --proxy-mode=userspace in the command array.
      - command:
        - /usr/local/bin/kube-proxy
        - --config=/var/lib/kube-proxy/config.conf
        - --hostname-override=$(NODE_NAME)
        - --proxy-mode=userspace
With manually installed Kubernetes on CoreOS, how does one install and use the Kubernetes addon manager?
I've found references to the addon manager being the current standard way of installing Kubernetes addons, but I can't find any authoritative documentation on it. Hoping someone can help me out here.
The addon manager is deployed as a normal pod or a deployment, with a simple kubectl apply -f.
The yaml looks something like this; check for the specific version that you need:
apiVersion: v1
kind: Pod
metadata:
  name: kube-addon-manager
  namespace: kube-system
  labels:
    component: kube-addon-manager
spec:
  hostNetwork: true
  containers:
    - name: kube-addon-manager
      # When updating version also bump it in:
      #   - cluster/images/hyperkube/static-pods/addon-manager-singlenode.json
      #   - cluster/images/hyperkube/static-pods/addon-manager-multinode.json
      #   - test/kubemark/resources/manifests/kube-addon-manager.yaml
      image: gcr.io/google-containers/kube-addon-manager:v6.4-beta.1
      command:
        - /bin/bash
        - -c
        - /opt/kube-addons.sh 1>>/var/log/kube-addon-manager.log 2>&1
      resources:
        requests:
          cpu: 5m
          memory: 50Mi
      volumeMounts:
        - mountPath: /etc/kubernetes/
          name: addons
          readOnly: true
        - mountPath: /var/log
          name: varlog
          readOnly: false
  volumes:
    - hostPath:
        path: /etc/kubernetes/
      name: addons
    - hostPath:
        path: /var/log
      name: varlog
The addon manager watches the yaml files under /etc/kubernetes/addons/; put any addon you like in that directory to install it.
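As an illustration of what goes in that directory, addon manifests are usually labelled with addonmanager.kubernetes.io/mode so the addon manager knows how to manage them. The file name and the namespace below are made up for the example:

# /etc/kubernetes/addons/my-addon.yaml  (illustrative file name)
apiVersion: v1
kind: Namespace
metadata:
  name: my-addon
  labels:
    addonmanager.kubernetes.io/mode: Reconcile   # or EnsureExists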