In a simple Postgres Deployment, I want to choose the volume depending on the namespace. The aim is to use the same Deployment configuration file to create Postgres deployments in different namespaces (e.g. production/staging).
What ways are there to achieve this?
Below is my configuration file. I basically want to make MAKE_THIS_DEPENDENT_ON_NAMESPACE depend on the environment (or namespace) this Deployment is used in.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - image: postgres:9.6
          name: postgres
          volumeMounts:
            - name: postgres-persistent-storage
              mountPath: /var/lib/postgresql
      volumes:
        - name: postgres-persistent-storage
          gcePersistentDisk:
            pdName: MAKE_THIS_DEPENDENT_ON_NAMESPACE
You should try using a PersistentVolumeClaim instead; PVCs are namespaced.
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: dockerfile/nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
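As a rough sketch (the claim name, size, and namespace below are assumptions, not taken from your setup), you would create a claim with the same name in each namespace; the Deployment stays identical, and the claim is resolved within whichever namespace it is applied to:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-storage      # same claim name in every namespace
  namespace: staging          # change per environment (e.g. production)
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
The Deployment's volumes section would then reference this claim via persistentVolumeClaim/claimName instead of gcePersistentDisk, and each namespace binds to its own underlying disk (dynamically provisioned, or via a pre-created PersistentVolume).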
I need to create a single pod with multiple containers for MySQL, MongoDB, and MSSQL. My question is whether I need to create a PersistentVolume and PersistentVolumeClaim for each container and specify the volume in the pod configuration, or whether a single PV and PVC is enough for all the containers in a single pod, like in the configs below.
Could you verify whether the configuration below is sufficient?
PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypod-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypod-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mypod
  labels:
    app: mypod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mypod
  template:
    metadata:
      labels:
        app: mypod
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: mypod-pvc
      containers:
        - name: mysql
          image: mysql/mysql-server:latest
          ports:
            - containerPort: 3306
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/var/lib/mysql"
              name: task-pv-storage
        - name: mongodb
          image: openshift/mongodb-24-centos7
          ports:
            - containerPort: 27017
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/var/lib/mongodb"
              name: task-pv-storage
        - name: mssql
          image: mcr.microsoft.com/mssql/server
          ports:
            - containerPort: 1433
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/var/opt/mssql"
              name: task-pv-storage
      imagePullSecrets:
        - name: devplat
You should not run multiple database containers inside a single pod.
Consider running each database in a separate StatefulSet.
Follow the reference below for MySQL:
https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/
You need to adopt a similar approach for MongoDB and the other databases as well.
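For illustration, here is a minimal single-replica StatefulSet sketch for MySQL with a volumeClaimTemplate, so each database gets its own storage (the names, headless Service, and Secret are assumptions, not taken from your configs):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql                  # assumes a headless Service named "mysql" exists
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql/mysql-server:latest
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD   # hypothetical Secret; create it beforehand
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: root-password
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
MongoDB and MSSQL would each get their own StatefulSet (and Service) with their own volumeClaimTemplate, rather than sharing one PVC across containers in a single pod.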
I deployed a Postgres database on Kubernetes with GlusterFS as the volume, but every time I restart my pod all of the data is lost. Why is that?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
  namespace: gitlab
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13.1
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql
              name: postgres
          env:
            - name: POSTGRES_USERNAME
              valueFrom:
                secretKeyRef:
                  name: gitlab
                  key: postgres_username
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: gitlab
                  key: postgres_password
            - name: POSTGRES_DB
              valueFrom:
                secretKeyRef:
                  name: gitlab
                  key: postgres_db
      volumes:
        - name: postgres
          glusterfs:
            endpoints: glusterfs-cluster
            path: gv
Define PV and PVC objects; see below for reference.
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: postgres
  name: postgres-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Then bind the PVC to the pod as shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          ...
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-pv-claim
      volumes:
        - name: postgres-pv-claim
          persistentVolumeClaim:
            claimName: postgres-pv-claim
As per the Kubernetes documentation (https://kubernetes.io/docs/concepts/storage/volumes/#glusterfs): unlike emptyDir, which is erased when a Pod is removed, the contents of a glusterfs volume are preserved and the volume is merely unmounted. I suggest raising an issue at https://github.com/kubernetes/kubernetes/issues/new/choose.
If you want to install GitLab with a PostgreSQL backend, it will be easier to use the Helm charts below (a sketch of chart value overrides follows the links).
https://docs.gitlab.com/charts/
https://artifacthub.io/packages/helm/bitnami/postgresql
https://artifacthub.io/packages/helm/bitnami/postgresql-ha
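As a sketch only, a values override for the bitnami/postgresql chart might look like this (the key names assume a recent chart version, and the credentials and size are placeholders, not values from your setup):
# values.yaml -- illustrative overrides for bitnami/postgresql
auth:
  username: gitlab
  password: changeme
  database: gitlabhq_production
primary:
  persistence:
    size: 10Gi
The chart then creates the StatefulSet, Service, and PVC for you, so you do not have to hand-maintain the GlusterFS volume definition.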
You can do this:
1. For stateful services such as databases, use a StatefulSet controller for the deployment.
2. Use shared (network-backed) storage rather than local volumes, because the Pod may be scheduled onto a different node when it is recreated.
I want to mount an Azure Disk into Azure Kubernetes Service for a PostgreSQL pod. My YAML files:
postgres-storage.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  capacity:
    storage: 80Gi
  storageClassName: manual
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  azureDisk:
    kind: Managed
    diskName: es-us-dev-core-test
    diskURI: /subscriptions/id/resourceGroups/kubernetes_resources_group/providers/Microsoft.Compute/disks/dev-test
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 80Gi
postgres-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: dev-test
  POSTGRES_USER: admintpost
  POSTGRES_PASSWORD: ada3dassasa
StatefulSet.yml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-statefulset
  labels:
    app: postgres
spec:
  serviceName: "postgres"
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:12
          envFrom:
            - configMapRef:
                name: postgres-config
          ports:
            - containerPort: 5432
              name: postgresdb
          volumeMounts:
            - name: pv-data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: pv-data
          persistentVolumeClaim:
            claimName: postgres-pv-claim
Instructions for creating the disk: https://learn.microsoft.com/en-us/azure/aks/azure-disk-volume
I get an error that it cannot connect to the disk. Could you please tell me how to add the Azure Disk to the pod? Thanks.
Create Azure Disk in the correct resource group
Looking at the file postgres-storage.yaml:
in spec.azureDisk.diskURI I see that you have created the disk in the resource group kubernetes_resources_group. However, you should create the disk inside a resource group whose name is something like this:
MC_kubernetes_resources_group_<your cluster name>_<region of the cluster>
Make sure that you create the disk in the same availability zone as your cluster.
Set caching mode to None
In the file postgres-storage.yaml:
set spec.azureDisk.cachingMode to None
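The azureDisk block of the PersistentVolume would then look roughly like this (the diskName and diskURI are the ones from your file; the URI would also need to point at the disk in the MC_* resource group described above):
azureDisk:
  kind: Managed
  cachingMode: None
  diskName: es-us-dev-core-test
  diskURI: /subscriptions/id/resourceGroups/kubernetes_resources_group/providers/Microsoft.Compute/disks/dev-test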
Fix the definition of StatefulSet.yml
If you're using Azure Disks then in the file StatefulSet.yml:
in spec.template.spec you should replace the following:
volumeMounts:
  - name: pv-data
    mountPath: /var/lib/postgresql/data
with this:
volumeMounts:
  - name: pv-data
    mountPath: /var/lib/postgresql/data
    subPath: pgdata
EDIT: fixed some mistakes in the last part.
I'm trying to deploy Drupal 7 in Kubernetes. It fails with the error: Fatal error: require_once(): Failed opening required '/var/www/html/modules/system/system.install' (include_path='.:/usr/local/lib/php') in /var/www/html/includes/install.core.inc on line 241.
Here is the Kubernetes deployment manifest:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: drupal-pvc
  annotations:
    pv.beta.kubernetes.io/gid: "drupal-gid"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  name: drupal-service
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
  selector:
    app: drupal
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: drupal
  name: drupal
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: drupal
    spec:
      initContainers:
        - name: init-sites-volume
          image: drupal:7.72
          command: ['/bin/bash', '-c']
          args: ['cp -r /var/www/html/sites/ /data/; chown www-data:www-data /data/ -R']
          volumeMounts:
            - mountPath: /data
              name: vol-drupal
      containers:
        - image: drupal:7.72
          name: drupal
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /var/www/html/modules
              name: vol-drupal
              subPath: modules
            - mountPath: /var/www/html/profiles
              name: vol-drupal
              subPath: profiles
            - mountPath: /var/www/html/sites
              name: vol-drupal
              subPath: sites
            - mountPath: /var/www/html/themes
              name: vol-drupal
              subPath: themes
      volumes:
        - name: vol-drupal
          persistentVolumeClaim:
            claimName: drupal-pvc
However, when I remove the volumeMounts from the drupal container, it works. I need to use volumes in order to persist the website data; can anyone suggest a fix?
Update: I have also added the manifest for the persistent volume.
Check whether you can write to the mounted volume:
# kubectl exec -it drupal-zxxx -- sh
$ ls -alhtr /var/www/html/modules
$ cd /var/www/html/modules
$ touch test.txt
Storage configured with a group ID (GID) allows writing only by Pods using the same GID; a mismatched or missing GID causes permission-denied errors.
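One way to address this (a sketch; the GID of 33 is an assumption, taken from the www-data group in the Debian-based drupal image, not from your cluster) is to set fsGroup in the pod-level securityContext so the mounted volume is made writable for that group:
# added under spec.template.spec in the Deployment
securityContext:
  fsGroup: 33      # assumed GID of www-data inside drupal:7.72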
Alternatively, you could try an operator for Drupal:
https://github.com/geerlingguy/drupal-operator
A Helm chart is another option:
https://bitnami.com/stack/drupal/helm
Below is the deployment YAML. After deployment, I can access the pod and I can see the mountPath "/usr/share/nginx/html", but I cannot find "/work-dir", which should have been created by the initContainer.
Could someone explain the reason?
Thanks and regards
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: workdir
          mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
    - name: install
      image: busybox
      command:
        - wget
        - "-O"
        - "/work-dir/index.html"
        - http://kubernetes.io
      volumeMounts:
        - name: workdir
          mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
    - name: workdir
      emptyDir: {}
The volume at "/work-dir" is mounted by the init container, and the "/work-dir" location only exists in the init container. When the init container completes, its filesystem is gone, so the "/work-dir" directory in that init container is gone with it. The application (nginx) container mounts the same volume, too (albeit at a different location), providing a mechanism for the two containers to share its content.
Per the docs:
Init containers can run with a different view of the filesystem than app containers in the same Pod.
The volume mount with a PVC allows you to share the contents of /work-dir/ and /usr/share/nginx/html/, but it does not mean the nginx container will have the /work-dir folder. Given this, you may think that you could just mount the path /, which would allow you to share all folders underneath. However, a mountPath of / does not work.
So, how do you solve your problem? You could have another pod mount /work-dir/ in case you actually need the folder. Here is an example (PVC and Deployment with mounts):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-fs-pvc
  namespace: default
  labels:
    mojix.service: default-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: shared-fs
  labels:
    mojix.service: shared-fs
spec:
  replicas: 1
  selector:
    matchLabels:
      mojix.service: shared-fs
  template:
    metadata:
      creationTimestamp: null
      labels:
        mojix.service: shared-fs
    spec:
      terminationGracePeriodSeconds: 3
      containers:
        - name: nginx-c
          image: nginx:latest
          volumeMounts:
            - name: shared-fs-volume
              mountPath: /var/www/static/
        - name: alpine-c
          image: alpine:latest
          command: ["/bin/sleep", "10000s"]
          lifecycle:
            postStart:
              exec:
                command: ["/bin/mkdir", "-p", "/work-dir"]
          volumeMounts:
            - name: shared-fs-volume
              mountPath: /work-dir/
      volumes:
        - name: shared-fs-volume
          persistentVolumeClaim:
            claimName: shared-fs-pvc