I have an issue with my deployment when I try to share a folder using a Kubernetes volume.
The folder is shared using Azure File Storage.
If I deploy my image without sharing the folder (/integrations), the app starts.
As shown in the image below, the pod (viewed via Lens) is up and running.
If I map the folder to a volume, the pod gets stuck in an error state with this message.
Here is my YAML deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: sandbox-pizzly
name: sandbox-pizzly-widget
labels:
app: sandbox-pizzly-widget
product: sandbox-pizzly
app.kubernetes.io/name: "sandbox-pizzly-widget"
app.kubernetes.io/version: "latest"
app.kubernetes.io/managed-by: "xxxx"
app.kubernetes.io/component: "sandbox-pizzly-widget"
app.kubernetes.io/part-of: "sandbox-pizzly"
spec:
replicas: 1
selector:
matchLabels:
app: sandbox-pizzly-widget
template:
metadata:
labels:
app: sandbox-pizzly-widget
spec:
containers:
- name: sandbox-pizzly-widget
image: davidep931/pizzly-proxy:latest
ports:
- containerPort: 8080
env:
- name: NODE_ENV
value: "production"
- name: DASHBOARD_USERNAME
value: "admin"
- name: DASHBOARD_PASSWORD
value: "admin"
- name: SECRET_KEY
value: "devSecretKey"
- name: PUBLISHABLE_KEY
value: "devPubKey"
- name: PROXY_USES_SECRET_KEY_ONLY
value: "FALSE"
- name: COOKIE_SECRET
value: "devCookieSecret"
- name: AUTH_CALLBACK_URL
value: "https://pizzly.mydomain/auth/callback"
- name: DB_HOST
value: "10.x.x.x"
- name: DB_PORT
value: "5432"
- name: DB_DATABASE
value: "postgresdb"
- name: DB_USER
value: "username"
- name: DB_PASSWORD
value: "password"
- name: PORT
value: "8080"
volumeMounts:
- mountPath: "/home/node/app/integrations"
name: pizzlystorage
resources:
requests:
memory: "100Mi"
cpu: "50m"
limits:
cpu: "75m"
memory: "200Mi"
---
apiVersion: v1
kind: Service
metadata:
namespace: sandbox-pizzly
name: sandbox-pizzly-widget
spec:
ports:
- port: 8080
targetPort: 8080
selector:
app: sandbox-pizzly-widget
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: sandbox-pizzly-pv-volume
labels:
type: local
app: products
spec:
storageClassName: azurefile
capacity:
storage: 1Gi
azureFile:
secretName: azure-secret
shareName: sandbox-pizzly-pv
readOnly: false
secretNamespace: sandbox-pizzly
accessModes:
- ReadWriteMany
claimRef:
namespace: sandbox-pizzly
name: sandbox-pizzly-pv-claim
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
namespace: sandbox-pizzly
name: sandbox-pizzly-pv-claim
labels:
app: products
spec:
storageClassName: azurefile
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: azurefilestorage
provisioner: kubernetes.io/azure-file
parameters:
storageAccount: persistentsapizzly
reclaimPolicy: Retain
---
apiVersion: v1
kind: Secret
metadata:
name: azure-secret
namespace: sandbox-pizzly
type: Opaque
data:
azurestorageaccountname: xxxxxxxxxxxxxxxxxxxxx
azurestorageaccountkey: xxxxxxxxxxxxxxxxxxxxxxxxxxx
If, in the few seconds before the pod gets stuck, I access the integrations folder and run touch test.txt, I do find that file in the Azure File Storage.
Here is what I see a few seconds before the shell auto-closes due to the CrashLoopBackOff:
Here is the Dockerfile:
FROM node:14-slim
WORKDIR /app
# Copy in dependencies for building
COPY *.json ./
COPY yarn.lock ./
# COPY config ./config
COPY integrations ./integrations/
COPY src ./src/
COPY tests ./tests/
COPY views ./views/
RUN yarn install
# Actual image to run from.
FROM node:14-slim
# Make sure we have ca certs for TLS
RUN apt-get update && apt-get install -y \
curl \
wget \
gnupg2 ca-certificates libnss3 \
git
# Make a directory for the node user. Not running Pizzly as root.
RUN mkdir /home/node/app && chown -R node:node /home/node/app
WORKDIR /home/node/app
USER node
# Startup script
COPY --chown=node:node ./startup.sh ./startup.sh
RUN chmod +x ./startup.sh
# COPY from first container
COPY --chown=node:node --from=0 /app/package.json ./package.json
COPY --chown=node:node --from=0 /app/dist/ .
COPY --chown=node:node --from=0 /app/views ./views
COPY --chown=node:node --from=0 /app/node_modules ./node_modules
# Run the startup script
CMD ./startup.sh
Here is the startup.sh script:
#!/bin/sh
# Docker Startup script
# Apply migration
./node_modules/.bin/knex --cwd ./src/lib/database/config migrate:latest
# Start App
node ./src/index.js
Do you have any idea what I'm missing or doing wrong?
Thank you,
Dave.
Well, there are two things I think you need to know when you mount an Azure file share onto an existing folder of a pod as a volume:
it will cover (hide) the files that already exist in that folder
the mount path will be owned by the root user
This means that if your application depends on the existing files to start, the mount will cause a problem. And if your application runs as a non-root user (for example, the app user), that may also cause a problem. Here I guess the problem is caused by the first limitation.
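If the first limitation is indeed the cause, one common workaround is to seed the share once before the main container starts, using an initContainer added to the Deployment's pod spec. Below is a minimal sketch, under the assumptions that the image ships its files at /home/node/app/integrations and that the pizzlystorage volume in your Deployment is bound to the azurefile PVC; the initContainer mounts the share at a different path, so the files baked into the image stay visible and can be copied across:

      initContainers:
      - name: seed-integrations
        image: davidep931/pizzly-proxy:latest
        # Run as root so the share is writable regardless of its mount permissions.
        securityContext:
          runAsUser: 0
        # The share is mounted at /mnt/seed, so the image's own
        # /home/node/app/integrations is not hidden and can be copied over.
        # -n avoids overwriting files already present on the share.
        command: ["sh", "-c", "cp -rn /home/node/app/integrations/. /mnt/seed/"]
        volumeMounts:
        - mountPath: /mnt/seed
          name: pizzlystorage

For the second limitation (root ownership), an azureFile PersistentVolume also accepts mountOptions such as uid and gid, which can be set to match the non-root node user from your Dockerfile.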
Related
I have a Kubernetes cluster with two replicas of a PostgreSQL database in it, and I wanted to see the values stored in the database.
When I exec into one of the two postgres pods (kubectl exec --stdin --tty [postgres_pod] -- /bin/bash) and check the database from within, I only see part of the DB. The rest of the DB data is on the other Postgres pod, and I don't see any directory created by the persistent volumes with the whole database stored.
So, in short: I created 4 tables; in one postgres pod I have 4 tables but 2 of them are empty, while in the other postgres pod there are 3 tables, and the tables that were empty in the first pod are filled with data here.
Why don't the pods have the same data in them?
How can I access and download the entire database?
PS. I deployed the cluster using Helm on minikube.
Here are the YAML files:
---
apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-config
labels:
app: postgres
data:
POSTGRES_DB: database-pg
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
PGDATA: /data/pgdata
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: postgres-pv-volume
labels:
type: local
app: postgres
spec:
storageClassName: manual
capacity:
storage: 1Gi
accessModes:
- ReadWriteMany
hostPath:
path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: postgres-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: postgres
labels:
app: postgres
spec:
ports:
- name: postgres
port: 5432
nodePort: 30432
type: NodePort
selector:
app: postgres
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres
spec:
serviceName: postgres-service
selector:
matchLabels:
app: postgres
replicas: 2
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:13.2
volumeMounts:
- name: postgres-disk
mountPath: /data
# Config from ConfigMap
envFrom:
- configMapRef:
name: postgres-config
volumeClaimTemplates:
- metadata:
name: postgres-disk
spec:
accessModes: ["ReadWriteOnce"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
labels:
app: postgres
spec:
selector:
matchLabels:
app: postgres
replicas: 2
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:13.2
imagePullPolicy: IfNotPresent
envFrom:
- configMapRef:
name: postgres-config
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgredb
volumes:
- name: postgredb
persistentVolumeClaim:
claimName: postgres-pv-claim
---
I found a solution to my problem of downloading the volume directory; however, when I run multiple replicas of postgres, the tables of the DB are still scattered between the pods.
Here's what I did to download the postgres volume:
First of all, minikube only supports some specific directories for volumes to appear in:
minikube is configured to persist files stored under the following directories, which are made in the Minikube VM (or on your localhost if running on bare metal). You may lose data from other directories on reboots.
/data
/var/lib/minikube
/var/lib/docker
/tmp/hostpath_pv
/tmp/hostpath-provisioner
So I changed the mount path to be under the /data directory. This made the database volume visible.
After this I ssh'ed into minikube and copied the database volume to a new directory (I used /home/docker, since the minikube user is docker).
sudo cp -R /data/pgdata /home/docker
The volume pgdata was still owned by root (access denied error), so I changed it to be owned by docker. For this I also set a new password that I knew:
sudo passwd docker # change password for docker user
sudo chown -R docker: /home/docker/pgdata # change owner from root to docker
Then you can exit and copy the directory to your local machine:
exit
scp -r -i $(minikube ssh-key) docker@$(minikube ip):/home/docker/pgdata [your_local_path]
NOTE
Mario's advice, which is to use pg_dump, is probably a better way to copy a database. I still wanted to download the volume directory to see whether it contained the full database when the pods only had part of the tables. In the end, it turned out it didn't.
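For completeness, a pg_dump sketch (assuming a pod named postgres-0 and the database-pg database / postgres user from the ConfigMap above; the official image allows passwordless local connections by default, otherwise set PGPASSWORD):

kubectl exec postgres-0 -- pg_dump -U postgres database-pg > database-pg.sql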
I'm running an Ubuntu container with SQL Server in my local Kubernetes environment with Docker Desktop on a Windows laptop.
Now I'm trying to mount a local folder (C:\data\sql) that contains database files into the pod.
For this, I configured a persistent volume and persistent volume claim in Kubernetes, but it doesn't seem to mount correctly. I don't see errors or anything, but when I go into the container using docker exec -it and inspect the data folder, it's empty. I expect the files from the local folder to appear in the mounted folder 'data', but that's not the case.
Is something wrongly configured in the PV, PVC or pod?
Here are my yaml files:
apiVersion: v1
kind: PersistentVolume
metadata:
name: dev-customer-db-pv
labels:
type: local
app: customer-db
chart: customer-db-0.1.0
release: dev
heritage: Helm
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /C/data/sql
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: dev-customer-db-pvc
labels:
app: customer-db
chart: customer-db-0.1.0
release: dev
heritage: Helm
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
apiVersion: apps/v1
kind: Deployment
metadata:
name: dev-customer-db
labels:
ufo: dev-customer-db-config
app: customer-db
chart: customer-db-0.1.0
release: dev
heritage: Helm
spec:
selector:
matchLabels:
app: customer-db
release: dev
replicas: 1
template:
metadata:
labels:
app: customer-db
release: dev
spec:
volumes:
- name: dev-customer-db-pv
persistentVolumeClaim:
claimName: dev-customer-db-pvc
containers:
- name: customer-db
image: "mcr.microsoft.com/mssql/server:2019-latest"
imagePullPolicy: IfNotPresent
volumeMounts:
- name: dev-customer-db-pv
mountPath: /data
envFrom:
- configMapRef:
name: dev-customer-db-config
- secretRef:
name: dev-customer-db-secrets
At first, I was trying to define a volume in the pod without PV and PVC, but then I got access denied errors when I tried to read files from the mounted data folder.
spec:
volumes:
- name: dev-customer-db-data
hostPath:
path: C/data/sql
containers:
...
volumeMounts:
- name: dev-customer-db-data
mountPath: data
I've also tried running helm install with --set volumePermissions.enabled=true, but this didn't solve the access denied errors.
Based on this info from GitHub for Docker, there is no support for hostPath volumes in WSL 2.
Thus, the following workaround can be used.
We just need to prepend /run/desktop/mnt/host to the initial path on the host, /c/data/sql. There is no need for a PersistentVolume and PersistentVolumeClaim in this case - just remove them.
I changed spec.volumes for the Deployment according to the hostPath configuration information on the Kubernetes site:
volumes:
- name: dev-customer-db-pv
hostPath:
path: /run/desktop/mnt/host/c/data/sql
type: Directory
After applying these changes, the files can be found in the /data folder in the pod, since mountPath is /data.
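A quick way to verify the mount afterwards (the pod name is a placeholder for whatever kubectl get pods shows):

kubectl exec -it <dev-customer-db-pod> -- ls -la /data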
I'm trying to deploy Drupal 7 in Kubernetes. It fails with the error: Fatal error: require_once(): Failed opening required '/var/www/html/modules/system/system.install' (include_path='.:/usr/local/lib/php') in /var/www/html/includes/install.core.inc on line 241.
Here is K8S deployment manifest:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: drupal-pvc
annotations:
pv.beta.kubernetes.io/gid: "drupal-gid"
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
name: drupal-service
spec:
ports:
- name: http
port: 80
protocol: TCP
selector:
app: drupal
type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: drupal
name: drupal
spec:
replicas: 1
template:
metadata:
labels:
app: drupal
spec:
initContainers:
- name: init-sites-volume
image: drupal:7.72
command: ['/bin/bash', '-c']
args: ['cp -r /var/www/html/sites/ /data/; chown www-data:www-data /data/ -R']
volumeMounts:
- mountPath: /data
name: vol-drupal
containers:
- image: drupal:7.72
name: drupal
ports:
- containerPort: 80
volumeMounts:
- mountPath: /var/www/html/modules
name: vol-drupal
subPath: modules
- mountPath: /var/www/html/profiles
name: vol-drupal
subPath: profiles
- mountPath: /var/www/html/sites
name: vol-drupal
subPath: sites
- mountPath: /var/www/html/themes
name: vol-drupal
subPath: themes
volumes:
- name: vol-drupal
persistentVolumeClaim:
claimName: drupal-pvc
However, when I remove the volumeMounts from the drupal container, it works. I need to use volumes in order to persist the website data. Can anyone suggest a fix?
Update: I have also added the manifest for the persistence volume.
Check whether you can write to the mounted volume:
# kubectl exec -it drupal-zxxx -- sh
$ ls -alhtr /var/www/html/modules
$ cd /var/www/html/modules
$ touch test.txt
Storage configured with a group ID (GID) allows writing only by Pods using the same GID; mismatched or missing GIDs cause permission denied errors.
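If the write fails, one thing you can try is setting an fsGroup on the Drupal pod so the mounted volume is group-writable by the container's user. A minimal sketch against the Deployment above; 2000 is a placeholder GID that would have to match whatever group your storage actually expects:

    spec:
      template:
        spec:
          securityContext:
            # Placeholder GID; must match the group configured for the storage.
            fsGroup: 2000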
Alternatively, you could try an operator for Drupal:
https://github.com/geerlingguy/drupal-operator
A Helm chart is another option:
https://bitnami.com/stack/drupal/helm
For some context, I'm trying to build a staging / testing system on Kubernetes which starts with deploying a MariaDB instance on the cluster with some schema and data. I have a truncated / cleansed DB dump from prod to help me with that. Let's call that file dbdump.sql; it is present on my local box at the path /home/rjosh/database/script/. After much research, here is what my yaml file looks like:
apiVersion: v1
kind: PersistentVolume
metadata:
name: m3ma-pv-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 30Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: m3ma-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 30Gi
---
apiVersion: v1
kind: Service
metadata:
name: m3ma
spec:
ports:
- port: 3306
selector:
app: m3ma
clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: m3ma
spec:
selector:
matchLabels:
app: m3ma
strategy:
type: Recreate
template:
metadata:
labels:
app: m3ma
spec:
containers:
- image: mariadb:10.2
name: m3ma
env:
# Use secret in real usage
- name: MYSQL_ROOT_PASSWORD
value: password
ports:
- containerPort: 3306
name: m3ma
volumeMounts:
- name: m3ma-persistent-storage
mountPath: /var/lib/mysql/
- name: m3ma-host-path
mountPath: /docker-entrypoint-initdb.d/
volumes:
- name: m3ma-persistent-storage
persistentVolumeClaim:
claimName: m3ma-pv-claim
- name: m3ma-host-path
hostPath:
path: /home/smaikap/database/script/
type: Directory
The MariaDB instance is coming up, but without the schema and data that are present in /home/rjosh/database/script/dbdump.sql.
Basically, the mount is not working. If I connect to the pod and check /docker-entrypoint-initdb.d/, there is nothing there. How do I go about this?
A bit more detail: currently I'm testing it on minikube, but soon it will have to work on a GKE cluster. Looking at the documentation, hostPath is not the right choice for GKE. So, what is the correct way of doing this?
Are you sure your home directory is visible to Kubernetes? Minikube generally creates a small VM to run things in, which wouldn't have your home directory in it. The more usual way to handle this would be to make a very small new Docker image yourself, like:
FROM mariadb:10.2
COPY dbdump.sql /docker-entrypoint-initdb.d/
And then push it to a registry somewhere, and then use that image instead.
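For example (a sketch; the registry name and tag are placeholders for your own):

docker build -t registry.example.com/m3ma-db:1.0 .
docker push registry.example.com/m3ma-db:1.0

Then point the Deployment's image: field at registry.example.com/m3ma-db:1.0 instead of mariadb:10.2, and drop the m3ma-host-path volume and its mount.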
I am trying to run a Postgresql database using minikube with a persistent volume claim. These are the yaml specifications:
minikube-persistent-volume.yaml:
kind: PersistentVolume
apiVersion: v1
metadata:
name: pv0001
labels:
type: hostpath
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteMany
hostPath:
path: "/Users/jonathan/data"
postgres-persistent-volume-claim.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv-postgres
spec:
accessModes: [ "ReadWriteMany" ]
resources:
requests:
storage: 2Gi
postgres-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: postgres
spec:
replicas: 1
template:
metadata:
labels:
app: postgres
spec:
containers:
- image: postgres:9.5
name: postgres
ports:
- containerPort: 5432
name: postgres
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgres-disk
env:
- name: PGDATA
value: /var/lib/postgresql/data/pgdata
- name: POSTGRES_USER
value: keycloak
- name: POSTGRES_DATABASE
value: keycloak
- name: POSTGRES_PASSWORD
value: key
- name: POSTGRES_ROOT_PASSWORD
value: masterkey
volumes:
- name: postgres-disk
persistentVolumeClaim:
claimName: pv-postgres
When I start this, I get the following in the logs from the deployment:
[...]
fixing permissions on existing directory /var/lib/postgresql/data/pgdata ... ok
initdb: could not create directory "/var/lib/postgresql/data/pgdata/pg_xlog": Permission denied
initdb: removing contents of data directory "/var/lib/postgresql/data/pgdata"
Why do I get this Permission denied error and what can I do about it?
Maybe you're having a write permission issue with VirtualBox mounting those host folders.
Instead, use /data/postgres as the path and things will work.
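A minimal sketch of the adjusted PersistentVolume (identical to the one above, only the hostPath changed):

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0001
  labels:
    type: hostpath
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    # under /data so minikube persists it across reboots
    path: "/data/postgres"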
Minikube automatically persists the following directories so they will be preserved even if your VM is rebooted/recreated:
/data
/var/lib/localkube
/var/lib/docker
Read these sections for more details:
https://github.com/kubernetes/minikube#persistent-volumes
https://github.com/kubernetes/minikube#mounted-host-folders