I have created a deployment and I want to mount a host path into the container, but when I check the container I see only an empty folder.
Why is the folder empty? What can be the cause?
EDIT: I am using Windows OS.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservicepod6
  labels:
    app: servicepod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: servicepod
  template:
    metadata:
      labels:
        app: servicepod
    spec:
      containers:
      - name: php
        image: php:7.2-apache
        command: ["/bin/sh", "-c"]
        args: ["service apache2 start; sleep infinity"]
        ports:
        - name: serviceport
          containerPort: 80
        volumeMounts:
        - mountPath: /var/www/html/
          name: hostvolume
      volumes:
      - name: hostvolume
        hostPath:
          path: /C/Users/utkarsh/pentesting/learnings/kubernetes/app/objectmanagement/deployments/src/*
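For reference, this is roughly how I check inside the container (deployment name as above):
kubectl exec deploy/myservicepod6 -- ls -la /var/www/html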
EDIT FOR THE ANSWER -
I start minikube with: minikube start --mount-string="$HOME/test/src/code/file:/data"
Then I changed the deployment file as below (showing only the volume part):
spec:
  volumes:
  - name: hostvolume
    hostPath:
      path: /C/Users/utkarsh/pentesting/learnings/kubernetes/app/deployments/src
  containers:
  - name: php
    image: php:7.2-apache
    command: ["/bin/sh", "-c"]
    args: ["service apache2 start; sleep infinity"]
    ports:
    - name: serviceport
      containerPort: 80
    volumeMounts:
    - name: hostvolume
      mountPath: /test/src/code/file
When I log into the pod and go to the directory (/test/src/code/file), I find it empty.
Let me know what I am missing.
After a detailed search and some trial and error, I found the way.
This applies only to minikube:
First, we need to mount the host folder into the minikube VM:
minikube mount src/:/var/www/html
Then we define both hostPath and mountPath as /var/www/html, because the folder is now mounted at that path inside the VM.
volumes:
- name: hostvolume
  hostPath:
    path: /var/www/html
containers:
- name: php
  image: php:7.2-apache
  command: ["/bin/sh", "-c"]
  args: ["service apache2 start; sleep infinity"]
  workingDir: /var/www/html
  ports:
  - name: serviceport
    containerPort: 80
  volumeMounts:
  - name: hostvolume
    mountPath: /var/www/html
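To confirm the files are visible end to end (a quick check, using the deployment name from the question):
kubectl exec deploy/myservicepod6 -- ls /var/www/html
If the mount worked, this lists the contents of the src folder from the host.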
Related
I want to display the pod IP address in an nginx pod. Currently I am using an init container to initialize the pod by writing to a volume.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox:1.28
    env:
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    command:
    - echo
    - $(POD_IP) >> /work-dir/index.html
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
This should work in theory, but the file redirect doesn't work and the mounted file in the nginx container is blank. There's probably an easier way to do this, but I'm curious why this doesn't work.
Nothing is changed except how the command is passed in the init container. Redirection (>>) is a shell feature: without a shell, echo receives ">> /work-dir/index.html" as a literal argument and never writes the file, so the command has to be wrapped in sh -c.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox:1.28
    env:
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    command:
    - 'sh'
    - '-c'
    - 'echo $(POD_IP) > /work-dir/index.html'
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
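To verify the fix (a quick sketch; assuming the manifest above is saved as init-demo.yaml):
kubectl apply -f init-demo.yaml
kubectl exec init-demo -c nginx -- cat /usr/share/nginx/html/index.html
The second command should print the pod's own IP address.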
I started using K3s, so I'm an absolute noob. Now I'm wondering how I can create the .yaml files for pods on my own, or use a Docker image. (I couldn't find detailed info about that.)
I want OpenVPN or any other suggested VPN server running, so I can access my home devices from anywhere. It would save a lot of headache and time if someone could be so nice and help me a little.
Before, I had an OpenVPN server running when I only had one Raspberry Pi. But it looks like everything from the install to the config has changed with my K3s Kubernetes cluster.
How I made my K3s cluster with Rancher: https://youtu.be/X9fSMGkjtug
I tried for three hours to figure it out and found no real step-by-step guide for beginners...
I already have a Cloudflare DDNS script running to update my domain with the correct IP.
Thank you very much!
Here is an example of an OpenVPN client YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openvpn-client
spec:
  selector:
    matchLabels:
      app: openvpn-client
      vpn: vpn-id
  replicas: 1
  template:
    metadata:
      labels:
        app: openvpn-client
        vpn: vpn-id
    spec:
      volumes:
      - name: vpn-config
        secret:
          secretName: vpn-config
          items:
          - key: client.ovpn
            path: client.ovpn
      - name: vpn-auth
        secret:
          secretName: vpn-auth
          items:
          - key: auth.txt
            path: auth.txt
      - name: route-script
        configMap:
          name: route-script
          items:
          - key: route-override.sh
            path: route-override.sh
      - name: tmp
        emptyDir: {}
      initContainers:
      - name: vpn-route-init
        image: busybox
        command: ['/bin/sh', '-c', 'cp /vpn/route-override.sh /tmp/route/route-override.sh; chown root:root /tmp/route/route-override.sh; chmod o+x /tmp/route/route-override.sh;']
        volumeMounts:
        - name: tmp
          mountPath: /tmp/route
        - name: route-script
          mountPath: /vpn/route-override.sh
          subPath: route-override.sh
      containers:
      - name: vpn
        image: dperson/openvpn-client
        command: ["/bin/sh","-c"]
        args: ["openvpn --config 'vpn/client.ovpn' --auth-user-pass 'vpn/auth.txt' --script-security 3 --route-up /tmp/route/route-override.sh;"]
        stdin: true
        tty: true
        securityContext:
          privileged: true
          capabilities:
            add:
            - NET_ADMIN
        env:
        - name: TZ
          value: "Turkey"
        volumeMounts:
        - name: vpn-config
          mountPath: /vpn/client.ovpn
          subPath: client.ovpn
        - name: vpn-auth
          mountPath: /vpn/auth.txt
          subPath: auth.txt
        - name: tmp
          mountPath: /tmp/route
      - name: app1
        image: python:3.6-stretch
        command:
        - sleep
        - "100000"
        tty: true
      dnsConfig:
        nameservers:
        - 8.8.8.8
        - 8.8.4.4
You can also read more about this deployment here: https://bugraoz93.medium.com/openvpn-client-in-a-pod-kubernetes-d3345c66b014
You can also use a Helm chart, which makes it easy to set this up on Kubernetes via pre-made YAML: https://itnext.io/use-helm-to-deploy-openvpn-in-kubernetes-to-access-pods-and-services-217dec344f13
Docker OpenVPN client: https://github.com/dperson/openvpn-client
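The deployment above expects a Secret named vpn-config holding a client.ovpn key, a Secret named vpn-auth holding an auth.txt key, and a ConfigMap named route-script holding a route-override.sh key. A sketch of creating them (the local file names are assumptions):
# assumes client.ovpn, auth.txt and route-override.sh exist in the current directory
kubectl create secret generic vpn-config --from-file=client.ovpn=./client.ovpn
kubectl create secret generic vpn-auth --from-file=auth.txt=./auth.txt
kubectl create configmap route-script --from-file=route-override.sh=./route-override.sh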
The k8s Docker container mounts a host path, but fails to write its log files to the host. Can you tell me the reason?
The Kubernetes YAML is like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  namespace: test
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: db
        image: postgres:11.0-alpine
        command:
        - "docker-entrypoint.sh"
        - "postgres"
        - "-c"
        - "logging_collector=on"
        - "-c"
        - "log_directory=/var/lib/postgresql/log"
        ports:
        - containerPort: 5432
          protocol: TCP
        volumeMounts:
        - name: log-fs
          mountPath: /var/lib/postgresql/log
      volumes:
      - name: log-fs
        hostPath:
          path: /var/log
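One way to narrow this down (a sketch, using the names from the manifest above): compare what the container sees at the mount point with what the node has at /var/log:
kubectl exec -n test deploy/db -- ls -la /var/lib/postgresql/log
# and on the node that runs the pod:
ls -la /var/log
Note that the postgres image drops to the postgres user, so that user also needs write permission on the host directory for the logging collector to create files there.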
I'm trying to deploy RabbitMQ on a Kubernetes cluster and am using an initContainer to copy a file from a ConfigMap. However, the file is not copied after the pod is in a running state.
Initially, I tried without using an initContainer, but I was getting an error like "touch: cannot touch '/etc/rabbitmq/rabbitmq.conf': Read-only file system."
apiVersion: apps/v1
kind: Deployment
metadata:
  name: broker01
  namespace: s2sdocker
  labels:
    app: broker01
spec:
  replicas: 1
  selector:
    matchLabels:
      app: broker01
  template:
    metadata:
      name: broker01
      labels:
        app: broker01
    spec:
      initContainers:
      - name: configmap-copy
        image: busybox
        command: ['/bin/sh', '-c', 'cp /etc/rabbitmq/files/definitions.json /etc/rabbitmq/']
        volumeMounts:
        - name: broker01-definitions
          mountPath: /etc/rabbitmq/files
        - name: pre-install
          mountPath: /etc/rabbitmq
      containers:
      - name: broker01
        image: rabbitmq:3.7.17-management
        envFrom:
        - configMapRef:
            name: broker01-rabbitmqenv-cm
        ports:
        volumeMounts:
        - name: broker01-data
          mountPath: /var/lib/rabbitmq
        - name: broker01-log
          mountPath: /var/log/rabbitmq/log
        - name: broker01-definitions
          mountPath: /etc/rabbitmq/files
      volumes:
      - name: pre-install
        emptyDir: {}
      - name: broker01-data
        persistentVolumeClaim:
          claimName: broker01-data-pvc
      - name: broker01-log
        persistentVolumeClaim:
          claimName: broker01-log-pvc
      - name: broker01-definitions
        configMap:
          name: broker01-definitions-cm
The file "definitions.json" should be copied to /etc/reabbitmq folder. I have followed "Kubernetes deployment read-only filesystem error". But issue did not fix.
After making changes in the container's volumeMounts section (mounting the pre-install emptyDir at /etc/rabbitmq instead of the ConfigMap volume), I was able to copy the file into the /etc/rabbitmq folder.
Please find the modified code here:
- name: broker01
  image: rabbitmq:3.7.17-management
  envFrom:
  - configMapRef:
      name: broker01-rabbitmqenv-cm
  ports:
  volumeMounts:
  - name: broker01-data
    mountPath: /var/lib/rabbitmq
  - name: broker01-log
    mountPath: /var/log/rabbitmq/log
  - name: pre-install
    mountPath: /etc/rabbitmq
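To confirm the copy worked (a hedged check, using the names above):
kubectl exec -n s2sdocker deploy/broker01 -- ls -l /etc/rabbitmq/definitions.json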
Can you check the permissions on /etc/rabbitmq/?
Does the user have permission to copy the file to that location?
- name: pre-install
  mountPath: /etc/rabbitmq
I see that /etc/rabbitmq is a mount point. It is a read-only file system, and hence the file copy fails.
Can you update the permissions on the 'pre-install' mount point?
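For example (a sketch; substitute your actual pod name):
kubectl exec -n s2sdocker <pod-name> -- ls -ld /etc/rabbitmq
kubectl exec -n s2sdocker <pod-name> -- touch /etc/rabbitmq/.writetest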
I am trying to mount config files from a hostPath into a Kubernetes container. This works using minikube and a VirtualBox shared folder, but I am unable to make it work on Linux.
I am making use of AWS EKS and the following architecture: https://aws.amazon.com/quickstart/architecture/amazon-eks/. I think my problem is that the files need to live on each of the EKS node instances.
Below is the Deployment file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openhim-core-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: openhim-core
  template:
    metadata:
      labels:
        component: openhim-core
    spec:
      volumes:
      - name: core-config
        hostPath:
          path: /var/config/openhim-core
      containers:
      - name: openhim-core
        image: jembi/openhim-core:5.rc
        ports:
        - containerPort: 8080
        - containerPort: 5000
        - containerPort: 5001
        volumeMounts:
        - name: core-config
          mountPath: /usr/src/app/config
        env:
        - name: NODE_ENV
          value: development
After much pain I found the problem: I was placing the configuration on the Linux bastion host, where I have access to kubectl, but with a hostPath volume the configuration would have to be present on each of the EC2 instances in every availability zone.
The solution for me was to make use of an initContainer that downloads the config into an emptyDir volume instead:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openhim-core-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: openhim-core
  template:
    metadata:
      labels:
        component: openhim-core
    spec:
      containers:
      - name: openhim-core
        image: jembi/openhim-core:5
        ports:
        - containerPort: 8080
        - containerPort: 5000
        - containerPort: 5001
        volumeMounts:
        - name: core-config
          mountPath: /usr/src/app/config
        env:
        - name: NODE_ENV
          value: development
      initContainers:
      - name: install
        image: busybox
        command:
        - wget
        - "-O"
        - "/usr/src/app/config/development.json"
        - https://s3.eu-central-1.amazonaws.com/../development.json
        volumeMounts:
        - name: core-config
          mountPath: "/usr/src/app/config"
      volumes:
      - name: core-config
        emptyDir: {}
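To confirm the init container fetched the file (a hedged check, names as above):
kubectl exec deploy/openhim-core-deployment -- cat /usr/src/app/config/development.json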