Below is the deployment YAML. After deploying, I can access the pod and I can see the mountPath "/usr/share/nginx/html", but I cannot find "/work-dir", which should have been created by the initContainer.
Could someone explain the reason?
Thanks and regards
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://kubernetes.io
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
The volume is mounted at "/work-dir" only in the init container, so that path exists only in the init container's filesystem. When the init container completes, its filesystem is gone, and with it the "/work-dir" directory. The application (nginx) container mounts the same volume too, albeit at a different location, which is what lets the two containers share the volume's content.
Per the docs:
Init containers can run with a different view of the filesystem than
app containers in the same Pod.
Mounting the same volume (whether an emptyDir or a PVC) allows you to share the contents of /work-dir/ and /usr/share/nginx/html/, but it does not mean the nginx container will have a /work-dir folder. Given this, you might think you could simply mount the path /, which would share all folders underneath it. However, a mountPath of / does not work.
So, how do you solve your problem? You could mount the same volume at /work-dir/ in another container (or pod) if you actually need that folder. Here is an example (a PVC and a Deployment with the mounts):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-fs-pvc
  namespace: default
  labels:
    mojix.service: default-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: shared-fs
  labels:
    mojix.service: shared-fs
spec:
  replicas: 1
  selector:
    matchLabels:
      mojix.service: shared-fs
  template:
    metadata:
      creationTimestamp: null
      labels:
        mojix.service: shared-fs
    spec:
      terminationGracePeriodSeconds: 3
      containers:
      - name: nginx-c
        image: nginx:latest
        volumeMounts:
        - name: shared-fs-volume
          mountPath: /var/www/static/
      - name: alpine-c
        image: alpine:latest
        command: ["/bin/sleep", "10000s"]
        lifecycle:
          postStart:
            exec:
              command: ["/bin/mkdir", "-p", "/work-dir"]
        volumeMounts:
        - name: shared-fs-volume
          mountPath: /work-dir/
      volumes:
      - name: shared-fs-volume
        persistentVolumeClaim:
          claimName: shared-fs-pvc
I am getting a strange result from running nginx and an IIS server together in a single Kubernetes pod. It seems to be an issue with nginx.conf. If I bypass nginx and go directly to IIS, I see the standard landing page; however, when I go through the reverse proxy, I see only a partial result.
Here are the files:
nginx.conf:
events {
    worker_connections 4096;  ## Default: 1024
}

http {
    server {
        listen 81;

        # Use a variable to prevent nginx from checking the hostname at startup,
        # which leads to a container failure / restart loop because nginx starts
        # faster than the IIS server.
        set $target "http://127.0.0.1:80/";

        location / {
            proxy_pass $target;
        }
    }
}
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    ...
  name: ...
spec:
  replicas: 1
  selector:
    matchLabels:
      pod: ...
  template:
    metadata:
      labels:
        pod: ...
      name: ...
    spec:
      containers:
      - image: claudiubelu/nginx:1.15-1-windows-amd64-1809
        name: nginx-reverse-proxy
        volumeMounts:
        - mountPath: "C:/usr/share/nginx/conf"
          name: nginx-conf
        imagePullPolicy: Always
      - image: some-repo/proprietary-server-including-iis
        name: ...
        imagePullPolicy: Always
      nodeSelector:
        kubernetes.io/os: windows
      imagePullSecrets:
      - name: secret1
      volumes:
      - name: nginx-conf
        persistentVolumeClaim:
          claimName: pvc-nginx
Mounting nginx.conf from a volume is just a convenient way to rapidly test different configs; new configs can be swapped in with kubectl cp ./nginx/conf nginx-busybox-pod:/mnt/nginx/.
Busybox pod (used to access the PVC):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-busybox-pod
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "360000"
    imagePullPolicy: Always
    name: busybox
    volumeMounts:
    - name: nginx-conf
      mountPath: "/mnt/nginx/conf"
  restartPolicy: Always
  volumes:
  - name: nginx-conf
    persistentVolumeClaim:
      claimName: pvc-nginx
  nodeSelector:
    kubernetes.io/os: linux
And lastly the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nginx
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
  storageClassName: azurefile
Any ideas why?
I need to create a single pod with multiple containers for MySQL, MongoDB, and MSSQL. My question is: do I need to create a persistent volume and persistent volume claim for each container and specify each volume in the pod configuration, or is a single PV & PVC enough for all the containers in a single pod, as in the configs below?
Could you verify whether the configuration below is enough?
PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypod-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypod-pvc
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mypod
  labels:
    app: mypod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mypod
  template:
    metadata:
      labels:
        app: mypod
    spec:
      volumes:
      - name: task-pv-storage
        persistentVolumeClaim:
          claimName: mypod-pvc
      containers:
      - name: mysql
        image: mysql/mysql-server:latest
        ports:
        - containerPort: 3306
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: "/var/lib/mysql"
          name: task-pv-storage
      - name: mongodb
        image: openshift/mongodb-24-centos7
        ports:
        - containerPort: 27017
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: "/var/lib/mongodb"
          name: task-pv-storage
      - name: mssql
        image: mcr.microsoft.com/mssql/server
        ports:
        - containerPort: 1433
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: "/var/opt/mssql"
          name: task-pv-storage
      imagePullSecrets:
      - name: devplat
You should not run multiple database containers inside a single pod.
Consider running each database in a separate StatefulSet.
Follow the reference below for MySQL:
https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/
You need to adopt a similar approach for MongoDB and the other databases as well.
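For illustration, here is a minimal sketch of that approach for the MySQL piece alone. It assumes a headless Service named mysql already exists, and it uses a volumeClaimTemplate so the database gets its own PVC instead of sharing one volume; the names and the 3Gi size are assumptions, not requirements:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql          # assumes a headless Service named "mysql" exists
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql/mysql-server:latest
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  # Each replica gets its own PersistentVolumeClaim instead of sharing one volume
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 3Gi
MongoDB and MSSQL would each get their own StatefulSet (and Service) in the same pattern, with their own data paths (/var/lib/mongodb, /var/opt/mssql).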
I'm trying to deploy Drupal 7 on Kubernetes. It fails with the error: Fatal error: require_once(): Failed opening required '/var/www/html/modules/system/system.install' (include_path='.:/usr/local/lib/php') in /var/www/html/includes/install.core.inc on line 241.
Here is the K8s deployment manifest:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: drupal-pvc
  annotations:
    pv.beta.kubernetes.io/gid: "drupal-gid"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  name: drupal-service
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
  selector:
    app: drupal
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: drupal
  name: drupal
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: drupal
    spec:
      initContainers:
      - name: init-sites-volume
        image: drupal:7.72
        command: ['/bin/bash', '-c']
        args: ['cp -r /var/www/html/sites/ /data/; chown www-data:www-data /data/ -R']
        volumeMounts:
        - mountPath: /data
          name: vol-drupal
      containers:
      - image: drupal:7.72
        name: drupal
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /var/www/html/modules
          name: vol-drupal
          subPath: modules
        - mountPath: /var/www/html/profiles
          name: vol-drupal
          subPath: profiles
        - mountPath: /var/www/html/sites
          name: vol-drupal
          subPath: sites
        - mountPath: /var/www/html/themes
          name: vol-drupal
          subPath: themes
      volumes:
      - name: vol-drupal
        persistentVolumeClaim:
          claimName: drupal-pvc
However, when I remove the volumeMounts from the drupal container, it works. I need to use volumes in order to persist the website data; can anyone suggest a fix?
Update: I have also added the manifest for the persistent volume.
Check whether you can write to the mounted volume:
# kubectl exec -it drupal-zxxx -- sh
$ ls -alhtr /var/www/html/modules
$ cd /var/www/html/modules
$ touch test.txt
Storage configured with a group ID (GID) allows writing only by pods using the same GID; a mismatched or missing GID causes permission-denied errors.
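If the GID really is the problem, one way to give the pod a matching group is the pod-level securityContext. Here is a minimal sketch against the same drupal-pvc; the GID value 2000 is only a placeholder assumption for whatever your storage actually expects:
apiVersion: v1
kind: Pod
metadata:
  name: drupal-gid-check
spec:
  securityContext:
    # Placeholder GID: replace 2000 with the group ID your storage is configured for
    fsGroup: 2000
    supplementalGroups: [2000]
  containers:
  - name: drupal
    image: drupal:7.72
    volumeMounts:
    - mountPath: /var/www/html/sites
      name: vol-drupal
      subPath: sites
  volumes:
  - name: vol-drupal
    persistentVolumeClaim:
      claimName: drupal-pvc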
Alternatively, you could try out an operator for Drupal:
https://github.com/geerlingguy/drupal-operator
A Helm chart is another option:
https://bitnami.com/stack/drupal/helm
I want to change the timezone with a command.
I know about the hostPath approach.
Could you tell me how to apply the command?
ln -snf /usr/share/zoneinfo/$TZ /etc/localtime
It works well inside the container, but I don't know how to apply it with command and arguments in the YAML file.
You can change the timezone of your pod by mounting the desired timezone file from the host with a hostPath volume. Your YAML file will look something like this:
apiVersion: v1
kind: Pod
metadata:
  name: busybox-sleep
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - sleep
    - "1000000"
    volumeMounts:
    - name: tz-config
      mountPath: /etc/localtime
  volumes:
  - name: tz-config
    hostPath:
      path: /usr/share/zoneinfo/Europe/Prague
      type: File
If you want this across all pods or deployments, you need to add the volume and volumeMounts to each of your deployment files and change the path value in the hostPath section to the timezone you want to set.
Setting the TZ environment variable as below works fine for me on GCP Kubernetes.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: demo
        image: gcr.io/project/image:master
        imagePullPolicy: Always
        env:
        - name: TZ
          value: Europe/Warsaw
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 0
In a Deployment, you can do it by adding a volumeMount at /etc/localtime backed by the desired zoneinfo file. Here is an example I have for MariaDB:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mariadb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
      - name: mariadb
        image: mariadb
        ports:
        - containerPort: 3306
          name: mariadb
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        volumeMounts:
        - name: tz-config
          mountPath: /etc/localtime
      volumes:
      - name: tz-config
        hostPath:
          path: /usr/share/zoneinfo/Europe/Madrid
In order to add a hostPath volume in the deployment config, as suggested in the previous answers, you'll need to be a privileged user. Otherwise your deployment may fail with:
"hostPath": hostPath volumes are not allowed to be used
As a workaround you can try one of these options:
1. Add allowedHostPaths: {} next to volumes (see the PodSecurityPolicy sketch after this answer).
2. Add a TZ environment variable, for example TZ=Asia/Jerusalem.
(Option 2 is similar to running docker exec -it openshift/origin /bin/bash -c "export TZ='Asia/Jerusalem' && /bin/bash".)
In my case, setting up volumes and volumeMounts didn't help; setting the TZ environment variable alone worked.
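For option 1, the allowedHostPaths field lives in a PodSecurityPolicy (on clusters that still enforce PSPs). A rough sketch, where the policy name and the zoneinfo path prefix are assumptions:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: allow-zoneinfo-hostpath
spec:
  privileged: false
  # Only allow hostPath mounts under the zoneinfo directory, read-only
  allowedHostPaths:
  - pathPrefix: /usr/share/zoneinfo
    readOnly: true
  volumes:
  - hostPath
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny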
In a simple Postgres Deployment, I wish to choose the volume depending on the namespace. The aim is to use the same Deployment configuration file to create Postgres deployments in different namespaces (e.g. production/staging).
What ways are there to achieve this?
Below is my configuration file; I basically want to make MAKE_THIS_DEPENDENT_ON_NAMESPACE depend on the environment (or namespace) this Deployment is used in.
kind: Deployment
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - image: postgres:9.6
        name: postgres
        volumeMounts:
        - name: postgres-persistent-storage
          mountPath: /var/lib/postgresql
      volumes:
      - name: postgres-persistent-storage
        gcePersistentDisk:
          pdName: MAKE_THIS_DEPENDENT_ON_NAMESPACE
You should try using a PersistentVolumeClaim instead; PVCs are namespaced.
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: dockerfile/nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
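To make this namespace-dependent, you would create a claim with the same name (myclaim above) in each namespace, e.g. production and staging, and let each bind to its own disk. A minimal sketch, where the "standard" StorageClass is an assumption about your cluster:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
  namespace: staging          # create the same claim in "production" as well
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  # Assumption: a StorageClass named "standard" exists (the GKE default),
  # which dynamically provisions a GCE persistent disk per claim
  storageClassName: standard
Since the Deployment only references the claim by name, the same Deployment manifest then picks up a different disk in each namespace.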