How to deploy PostgreSQL on Kubernetes with an NFS volume - postgresql

I'm using the manifest below to deploy PostgreSQL on Kubernetes with an NFS persistent volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs2
spec:
  capacity:
    storage: 6Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.139.82.123
    path: /nfsfileshare/postgres
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs2
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 6Gi
---
apiVersion: v1
kind: Service
metadata:
  name: db
  labels:
    app: aiflow-db
spec:
  selector:
    app: aiflow-db
  clusterIP: None
  ports:
    - port: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  namespace: test-aiflow
  labels:
    app: aiflow-db
spec:
  selector:
    matchLabels:
      app: aiflow-db
  template:
    metadata:
      labels:
        app: aiflow-db
    spec:
      containers:
        - name: db
          image: postgresql:10
          ports:
            - containerPort: 5432
          env:
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          volumeMounts:
            - mountPath: /var/lib/postgresql/data/pgdata
              name: nfs2
      volumes:
        - name: nfs2
          persistentVolumeClaim:
            claimName: nfs2
      restartPolicy: Always
The PostgreSQL data directory does get created on the NFS server (exported as /nfsfileshare/postgres *(rw,async,no_subtree_check,no_root_squash)):
total 124
drwx------ 19 999 root 4096 Aug 7 11:10 ./
drwxrwxrwx 5 root root 4096 Aug 7 10:28 ../
drwx------ 3 999 docker 4096 Aug 7 11:02 base/
drwx------ 2 999 docker 4096 Aug 7 11:10 global/
drwx------ 2 999 docker 4096 Aug 7 11:01 pg_commit_ts/
drwx------ 2 999 docker 4096 Aug 7 11:01 pg_dynshmem/
-rw------- 1 999 docker 4513 Aug 7 11:02 pg_hba.conf
-rw------- 1 999 docker 1636 Aug 7 11:02 pg_ident.conf
drwx------ 4 999 docker 4096 Aug 7 11:09 pg_logical/
drwx------ 4 999 docker 4096 Aug 7 11:01 pg_multixact/
drwx------ 2 999 docker 4096 Aug 7 11:10 pg_notify/
drwx------ 2 999 docker 4096 Aug 7 11:02 pg_replslot/
drwx------ 2 999 docker 4096 Aug 7 11:01 pg_serial/
drwx------ 2 999 docker 4096 Aug 7 11:01 pg_snapshots/
drwx------ 2 999 docker 4096 Aug 7 11:02 pg_stat/
drwx------ 2 999 docker 4096 Aug 7 11:02 pg_stat_tmp/
drwx------ 2 999 docker 4096 Aug 7 11:02 pg_subtrans/
drwx------ 2 999 docker 4096 Aug 7 11:02 pg_tblspc/
drwx------ 2 999 docker 4096 Aug 7 11:01 pg_twophase/
-rw------- 1 999 docker 3 Aug 7 11:02 PG_VERSION
drwx------ 3 999 docker 4096 Aug 7 11:02 pg_wal/
drwx------ 2 999 docker 4096 Aug 7 11:02 pg_xact/
-rw------- 1 999 docker 88 Aug 7 11:02 postgresql.auto.conf
-rw------- 1 999 docker 22729 Aug 7 11:02 postgresql.conf
-rw------- 1 999 docker 74 Aug 7 11:10 postmaster.pid
However, the container gets stuck with the log below:
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data/pgdata ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
It seems to be stuck on post-bootstrap initialization.
It works only if I do not use the NFS volume (a hostPath volume works fine). Why is that?

NFS does not reliably support the fsync kernel VFS call, which the transaction log needs to ensure the redo logs are actually written out to disk. So you should use block storage when you need to run an RDBMS such as PostgreSQL or MySQL. You can run one on NFS, but you might lose data consistency.
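For example, instead of a hand-made NFS PV, the claim can target a block-backed StorageClass. This is a hedged sketch: the class name `standard` is a placeholder for whatever block storage your cluster actually offers.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes:
    - ReadWriteOnce            # block volumes attach to a single node; a single-instance RDBMS does not need RWX
  storageClassName: standard   # placeholder: any block-backed StorageClass available in your cluster
  resources:
    requests:
      storage: 6Gi
```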

I ran into the same problem: when I used Helm to deploy GitLab, PostgreSQL could not start and logged the errors below:
FATAL: data directory "/var/lib/postgresql/data/pgdata" has wrong ownership.
HINT: The server must be started by the user that owns the data directory.
I think this is because PostgreSQL requires its data directory to be owned by the postgres user and group, but NFS changes the owning user and group, which prevents PostgreSQL from running.
Switching to another tool such as GlusterFS might solve this problem, or try mounting MySQL's data over NFS instead.
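A common workaround for the ownership problem is an initContainer that fixes ownership before PostgreSQL starts. This is a hedged sketch: 999:999 is the postgres uid/gid in the official image, and the volume name `pgdata` is a placeholder that must match the volume declared in your pod spec.

```yaml
initContainers:
  - name: fix-ownership
    image: busybox
    # 999:999 is the postgres uid/gid in the official postgres image
    command: ["sh", "-c", "chown -R 999:999 /var/lib/postgresql/data"]
    volumeMounts:
      - name: pgdata           # placeholder: must match the pod's volume name
        mountPath: /var/lib/postgresql/data
```

Note this only works if the NFS export allows root to chown (e.g. no_root_squash, as in the export above).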


k8s: configmap mounted inside symbolic link to "..data" directory

Here is my volumeMount:
volumeMounts:
  - name: interpreter-spec-volume
    mountPath: /zeppelin/k8s/interpreter
And here are my volumes:
volumes:
  - name: interpreter-spec-volume
    configMap:
      name: zeppelin-files
      items:
        - key: interpreter-spec.yaml
          path: interpreter-spec.yaml
The problem arises from how the volume is mounted. My volumeMount ends up looking like this:
kubectl exec -ti zeppelin-759db57cb6-xw42b -- ls -la /zeppelin/k8s/interpreter
total 0
drwxrwxrwx. 3 root root 88 Jul 7 13:18 .
drwxr-xr-x. 3 root root 53 Jun 8 12:12 ..
drwxr-xr-x. 2 root root 35 Jul 7 13:18 ..2020_07_07_13_18_32.149716995
lrwxrwxrwx. 1 root root 31 Jul 7 13:18 ..data -> ..2020_07_07_13_18_32.149716995
lrwxrwxrwx. 1 root root 28 Jul 7 13:18 interpreter-spec.yaml -> ..data/interpreter-spec.yaml
Why is it mounting the ..data directory like this?
What can I say: this is expected behavior that is barely documented. It is due to how secrets and configmaps are mounted into the running container.
When you mount a secret or configmap as a volume, the path at which Kubernetes mounts it will contain the root-level items as symlinks of the same names pointing into a ..data directory, which is itself a symlink to the real mountpoint.
For example,
kubectl exec -ti zeppelin-759db57cb6-xw42b -- ls -la /zeppelin/k8s/interpreter
total 0
drwxrwxrwx. 3 root root 88 Jul 7 13:18 .
drwxr-xr-x. 3 root root 53 Jun 8 12:12 ..
drwxr-xr-x. 2 root root 35 Jul 7 13:18 ..2020_07_07_13_18_32.149716995
lrwxrwxrwx. 1 root root 31 Jul 7 13:18 ..data -> ..2020_07_07_13_18_32.149716995
lrwxrwxrwx. 1 root root 28 Jul 7 13:18 interpreter-spec.yaml -> ..data/interpreter-spec.yaml
The real mountpoint (..2020_07_07_13_18_32.149716995 in the example above) changes each time the secret or configmap (in your case) is updated, so the real path of your interpreter-spec.yaml changes after each update.
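The atomic-update trick behind those symlinks can be sketched in plain shell. This is a simplified illustration, not kubelet's exact mechanism (kubelet uses a rename for the final swap), and the timestamped directory names are made up:

```shell
# Work in a scratch dir that stands in for the volume mountpoint
demo=$(mktemp -d) && cd "$demo"

# kubelet writes the payload into a hidden timestamped dir...
mkdir -p ..2020_07_07_13_18_32.149716995
echo "version-1" > ..2020_07_07_13_18_32.149716995/interpreter-spec.yaml

# ...points ..data at it, and exposes each key as a symlink through ..data
ln -sfn ..2020_07_07_13_18_32.149716995 ..data
ln -sfn ..data/interpreter-spec.yaml interpreter-spec.yaml
cat interpreter-spec.yaml    # version-1

# On a configmap update, a NEW timestamped dir is written and ..data is
# retargeted, so the visible path never changes but its content does:
mkdir -p ..2020_07_07_14_00_00.000000000
echo "version-2" > ..2020_07_07_14_00_00.000000000/interpreter-spec.yaml
ln -sfn ..2020_07_07_14_00_00.000000000 ..data
cat interpreter-spec.yaml    # version-2
```

This indirection is what lets every key of the configmap update at once, atomically, without rewriting each visible file.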
What you can do is use the subPath option in volumeMounts. By design, a container using secrets or configmaps as a subPath volume mount will not receive updates. You can leverage this to mount files individually. You will need to change the pod spec each time you add or remove a file in the secret/configmap, and a deployment rollout will be required to apply changes after each secret/configmap update.
volumeMounts:
  - name: interpreter-spec-volume
    mountPath: /zeppelin/k8s/interpreter
    subPath: interpreter-spec.yaml
volumes:
  - name: interpreter-spec-volume
    configMap:
      name: zeppelin-files
I would also like to mention the question Kubernetes config map symlinks (..data/): is there a way to avoid them?, where you may find additional info.

The server must be started by the user that owns the data directory

I am trying to get some persistent storage for a Docker instance of PostgreSQL running on Kubernetes. However, the pod fails with:
FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
HINT: The server must be started by the user that owns the data directory.
This is the NFS configuration:
% exportfs -v
/srv/nfs/postgresql/postgres-registry
kubehost*.example.com(rw,wdelay,insecure,no_root_squash,no_subtree_check,sec=sys,rw,no_root_squash,no_all_squash)
$ ls -ldn /srv/nfs/postgresql/postgres-registry
drwxrwxrwx. 3 999 999 4096 Jul 24 15:02 /srv/nfs/postgresql/postgres-registry
$ ls -ln /srv/nfs/postgresql/postgres-registry
total 4
drwx------. 2 999 999 4096 Jul 25 08:36 pgdata
The full log from the pod:
2019-07-25T07:32:50.617532000Z The files belonging to this database system will be owned by user "postgres".
2019-07-25T07:32:50.618113000Z This user must also own the server process.
2019-07-25T07:32:50.619048000Z The database cluster will be initialized with locale "en_US.utf8".
2019-07-25T07:32:50.619496000Z The default database encoding has accordingly been set to "UTF8".
2019-07-25T07:32:50.619943000Z The default text search configuration will be set to "english".
2019-07-25T07:32:50.620826000Z Data page checksums are disabled.
2019-07-25T07:32:50.621697000Z fixing permissions on existing directory /var/lib/postgresql/data ... ok
2019-07-25T07:32:50.647445000Z creating subdirectories ... ok
2019-07-25T07:32:50.765065000Z selecting default max_connections ... 20
2019-07-25T07:32:51.035710000Z selecting default shared_buffers ... 400kB
2019-07-25T07:32:51.062039000Z selecting default timezone ... Etc/UTC
2019-07-25T07:32:51.062828000Z selecting dynamic shared memory implementation ... posix
2019-07-25T07:32:51.218995000Z creating configuration files ... ok
2019-07-25T07:32:51.252788000Z 2019-07-25 07:32:51.251 UTC [79] FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
2019-07-25T07:32:51.253339000Z 2019-07-25 07:32:51.251 UTC [79] HINT: The server must be started by the user that owns the data directory.
2019-07-25T07:32:51.262238000Z child process exited with exit code 1
2019-07-25T07:32:51.263194000Z initdb: removing contents of data directory "/var/lib/postgresql/data"
2019-07-25T07:32:51.380205000Z running bootstrap script ...
The deployment contains the following:
securityContext:
  runAsUser: 999
  supplementalGroups: [999,1000]
  fsGroup: 999
What am I doing wrong?
Edit: Added storage.yaml file:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-registry-pv-volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.3.7
    path: /srv/nfs/postgresql/postgres-registry
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-registry-pv-claim
  labels:
    app: postgres-registry
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
Edit: And the full deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres-registry
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres-registry
    spec:
      securityContext:
        runAsUser: 999
        supplementalGroups: [999,1000]
        fsGroup: 999
      containers:
        - name: postgres-registry
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: postgresdb
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: Sekret
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              subPath: "pgdata"
              name: postgredb-registry-persistent-storage
      volumes:
        - name: postgredb-registry-persistent-storage
          persistentVolumeClaim:
            claimName: postgres-registry-pv-claim
Even more debugging, adding:
command: ["/bin/bash", "-c"]
args: ["id -u; ls -ldn /var/lib/postgresql/data"]
Which returned:
999
drwx------. 2 99 99 4096 Jul 25 09:11 /var/lib/postgresql/data
Clearly, the UID/GID are wrong. Why?
Even with the workaround suggested by Jakub Bujny, I get this:
2019-07-25T09:32:08.734807000Z The files belonging to this database system will be owned by user "postgres".
2019-07-25T09:32:08.735335000Z This user must also own the server process.
2019-07-25T09:32:08.736976000Z The database cluster will be initialized with locale "en_US.utf8".
2019-07-25T09:32:08.737416000Z The default database encoding has accordingly been set to "UTF8".
2019-07-25T09:32:08.737882000Z The default text search configuration will be set to "english".
2019-07-25T09:32:08.738754000Z Data page checksums are disabled.
2019-07-25T09:32:08.739648000Z fixing permissions on existing directory /var/lib/postgresql/data ... ok
2019-07-25T09:32:08.766606000Z creating subdirectories ... ok
2019-07-25T09:32:08.852381000Z selecting default max_connections ... 20
2019-07-25T09:32:09.119031000Z selecting default shared_buffers ... 400kB
2019-07-25T09:32:09.145069000Z selecting default timezone ... Etc/UTC
2019-07-25T09:32:09.145730000Z selecting dynamic shared memory implementation ... posix
2019-07-25T09:32:09.168161000Z creating configuration files ... ok
2019-07-25T09:32:09.200134000Z 2019-07-25 09:32:09.199 UTC [70] FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
2019-07-25T09:32:09.200715000Z 2019-07-25 09:32:09.199 UTC [70] HINT: The server must be started by the user that owns the data directory.
2019-07-25T09:32:09.208849000Z child process exited with exit code 1
2019-07-25T09:32:09.209316000Z initdb: removing contents of data directory "/var/lib/postgresql/data"
2019-07-25T09:32:09.274741000Z running bootstrap script ... 999
2019-07-25T09:32:09.278124000Z drwx------. 2 99 99 4096 Jul 25 09:32 /var/lib/postgresql/data
Using your setup and ensuring the NFS mount is owned by 999:999, it worked just fine for me.
You're also missing an 's' in your name: postgredb-registry-persistent-storage
And with your subPath: "pgdata", do you need to change $PGDATA? I didn't include the subPath in my test.
$ sudo mount 172.29.0.218:/test/nfs ./nfs
$ sudo su -c "ls -al ./nfs" postgres
total 8
drwx------ 2 postgres postgres 4096 Jul 25 14:44 .
drwxrwxr-x 3 rei rei 4096 Jul 25 14:44 ..
$ kubectl apply -f nfspv.yaml
persistentvolume/postgres-registry-pv-volume created
persistentvolumeclaim/postgres-registry-pv-claim created
$ kubectl apply -f postgres.yaml
deployment.extensions/postgres-registry created
$ sudo su -c "ls -al ./nfs" postgres
total 124
drwx------ 19 postgres postgres 4096 Jul 25 14:46 .
drwxrwxr-x 3 rei rei 4096 Jul 25 14:44 ..
drwx------ 3 postgres postgres 4096 Jul 25 14:46 base
drwx------ 2 postgres postgres 4096 Jul 25 14:46 global
drwx------ 2 postgres postgres 4096 Jul 25 14:46 pg_commit_ts
. . .
I noticed that using nfs: directly in the persistent volume took significantly longer to initialize the database, whereas using hostPath: pointing at the mounted NFS volume behaved normally.
So after a few minutes:
$ kubectl logs postgres-registry-675869694-9fp52 | tail -n 3
2019-07-25 21:50:57.181 UTC [30] LOG: database system is ready to accept connections
done
server started
$ kubectl exec -it postgres-registry-675869694-9fp52 psql
psql (11.4 (Debian 11.4-1.pgdg90+1))
Type "help" for help.
postgres=#
Checking the uid/gid
$ kubectl exec -it postgres-registry-675869694-9fp52 bash
postgres@postgres-registry-675869694-9fp52:/$ whoami && id -u && id -g
postgres
999
999
nfspv.yaml:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-registry-pv-volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 172.29.0.218
    path: /test/nfs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-registry-pv-claim
  labels:
    app: postgres-registry
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
postgres.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres-registry
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres-registry
    spec:
      securityContext:
        runAsUser: 999
        supplementalGroups: [999,1000]
        fsGroup: 999
      containers:
        - name: postgres-registry
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: postgresdb
            - name: POSTGRES_USER
              value: postgres
            - name: POSTGRES_PASSWORD
              value: Sekret
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgresdb-registry-persistent-storage
      volumes:
        - name: postgresdb-registry-persistent-storage
          persistentVolumeClaim:
            claimName: postgres-registry-pv-claim
I cannot explain why those two IDs are different, but as a workaround I would try to override the postgres entrypoint with:
command: ["/bin/bash", "-c"]
args: ["chown -R 999:999 /var/lib/postgresql/data && ./docker-entrypoint.sh postgres"]
This type of error is quite common when you mount an NTFS directory into your Docker container. NTFS directories don't support ext3 file and directory access controls. The only way to make it work is to mount a directory from an ext3 drive into your container.
I got a bit desperate when playing around with Apache/PHP containers and mounting the www folder. After I switched to files residing on an ext3 filesystem, the problem disappeared.
I published a short Docker tutorial on YouTube; it may help in understanding this problem: https://www.youtube.com/watch?v=eS9O05TTFjM

Kubernetes config map symlinks (..data/) : is there a way to avoid them?

I have noticed that when I create and mount a config map that contains some text files, the container sees those files as symlinks to ..data/myfile.txt.
For example, if my config map is named tc-configs and contains two XML files named stripe1.xml and stripe2.xml, and I mount this config map to /configs in my container, I will have, in my container:
bash-4.4# ls -al /configs/
total 12
drwxrwxrwx 3 root root 4096 Jun 4 14:47 .
drwxr-xr-x 1 root root 4096 Jun 4 14:47 ..
drwxr-xr-x 2 root root 4096 Jun 4 14:47 ..2018_06_04_14_47_03.291041453
lrwxrwxrwx 1 root root 31 Jun 4 14:47 ..data -> ..2018_06_04_14_47_03.291041453
lrwxrwxrwx 1 root root 18 Jun 4 14:47 stripe1.xml -> ..data/stripe1.xml
lrwxrwxrwx 1 root root 18 Jun 4 14:47 stripe2.xml -> ..data/stripe2.xml
I guess Kubernetes requires those symlinks and the ..data and ..timestamp/ folders, but I know some applications that can fail to start up if they see unexpected files or folders.
Is there a way to tell Kubernetes not to generate all those symlinks and mount the files directly?
I think this solution is satisfactory: specifying the exact file path in mountPath gets rid of the symlinks to ..data and ..2018_06_04_19_31_41.860238952.
So if I apply a manifest like this:
apiVersion: v1
kind: Pod
metadata:
  name: my-lamp-site
spec:
  containers:
    - name: php
      image: php:7.0-apache
      volumeMounts:
        - mountPath: /var/www/html/users.xml
          name: site-data
          subPath: users.xml
  volumes:
    - name: site-data
      configMap:
        name: users
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: users
data:
  users.xml: |
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <users>
    </users>
Apparently, because I'm using subPath explicitly, and subPath mounts are not part of the "auto update magic" of ConfigMaps, I no longer see any symlinks:
$ kubectl exec my-lamp-site -c php -- ls -al /var/www/html
total 12
drwxr-xr-x 1 www-data www-data 4096 Jun 4 19:18 .
drwxr-xr-x 1 root root 4096 Jun 4 17:58 ..
-rw-r--r-- 1 root root 73 Jun 4 19:18 users.xml
Be careful not to forget subPath, otherwise users.xml will be a directory!
Back to my initial manifest :
spec:
  containers:
    - name: php
      image: php:7.0-apache
      volumeMounts:
        - mountPath: /var/www/html
          name: site-data
  volumes:
    - name: site-data
      configMap:
        name: users
I'll see those symlinks coming back :
$ kubectl exec my-lamp-site -c php -- ls -al /var/www/html
total 12
drwxrwxrwx 3 root root 4096 Jun 4 19:31 .
drwxr-xr-x 3 root root 4096 Jun 4 17:58 ..
drwxr-xr-x 2 root root 4096 Jun 4 19:31 ..2018_06_04_19_31_41.860238952
lrwxrwxrwx 1 root root 31 Jun 4 19:31 ..data -> ..2018_06_04_19_31_41.860238952
lrwxrwxrwx 1 root root 16 Jun 4 19:31 users.xml -> ..data/users.xml
Many thanks to psycotica0 on the K8s Canada Slack for putting me on the right track with subPath (it is briefly mentioned in the configmap documentation).
I am afraid I don't know whether you can tell Kubernetes not to generate those symlinks, although I think it is native behaviour.
If having those files and links is an issue, a workaround I can think of is to mount the configmap in one folder and copy the files over to another folder when initializing the container:
initContainers:
  - name: copy-config
    image: busybox
    command: ['sh', '-c', 'cp /configmap/* /configs']
    volumeMounts:
      - name: configmap
        mountPath: /configmap
      - name: config
        mountPath: /configs
But you would have to declare two volumes, one for the configMap (configmap) and one for the final directory (config):
volumes:
  - name: config
    emptyDir: {}
  - name: configmap
    configMap:
      name: myconfigmap
Obviously, change the volume type of the config volume as you please.

How to map one single file into kubernetes pod using hostPath?

I have my own nginx configuration /home/ubuntu/workspace/web.conf, generated by a script. I would prefer to have it under /etc/nginx/conf.d beside default.conf.
Below is the nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  volumes:
    - name: webconf
      hostPath:
        path: /home/ubuntu/workspace/web.conf
  containers:
    - image: nginx
      name: nginx
      ports:
        - containerPort: 18001
          protocol: TCP
      volumeMounts:
        - mountPath: /etc/nginx/conf.d/web.conf
          name: webconf
But it is mounted as a folder only:
$ kubectl create -f nginx.yaml
pod "nginx" created
$ kubectl exec -it nginx -- bash
root@nginx:/app# ls -al /etc/nginx/conf.d/
total 12
drwxr-xr-x 1 root root 4096 Aug 3 12:27 .
drwxr-xr-x 1 root root 4096 Aug 3 11:46 ..
-rw-r--r-- 2 root root 1093 Jul 11 13:06 default.conf
drwxr-xr-x 2 root root 0 Aug 3 11:46 web.conf
This works for a Docker container with -v hostfile:containerfile.
How can I do this in kubernetes ?
BTW: I use minikube 0.21.0 on Ubuntu 16.04 LTS with KVM.
Try using the subPath key on your volumeMounts like this:
apiVersion: v1
kind: Pod
metadata:
  name: singlefile
spec:
  containers:
    - image: ubuntu
      name: singlefiletest
      command:
        - /bin/bash
        - -c
        - ls -la /singlefile/ && cat /singlefile/hosts
      volumeMounts:
        - mountPath: /singlefile/hosts
          name: etc
          subPath: hosts
  volumes:
    - name: etc
      hostPath:
        path: /etc
Example:
$ kubectl apply -f singlefile.yaml
pod "singlefile" created
$ kubectl logs singlefile
total 24
drwxr-xr-x. 2 root root 4096 Aug 3 12:50 .
drwxr-xr-x. 1 root root 4096 Aug 3 12:50 ..
-rw-r--r--. 1 root root 1213 Apr 26 21:25 hosts
# /etc/hosts: Local Host Database
#
# This file describes a number of aliases-to-address mappings for the for
# local hosts that share this file.
...
Actually, it is caused by KVM, which is used by minikube.
path: /home/ubuntu/workspace/web.conf
If I log in to minikube, it is a folder in the VM.
$ ls -al /home/ubuntu/workspace # in minikube host
total 12
drwxrwxr-x 2 ubuntu ubuntu 4096 Aug 3 12:11 .
drwxrwxr-x 5 ubuntu ubuntu 4096 Aug 3 19:28 ..
-rw-rw-r-- 1 ubuntu ubuntu 1184 Aug 3 12:11 web.conf
$ minikube ssh
$ ls -al /home/ubuntu/workspace # in minikube vm
total 0
drwxr-xr-x 3 root root 0 Aug 3 19:41 .
drwxr-xr-x 4 root root 0 Aug 3 19:41 ..
drwxr-xr-x 2 root root 0 Aug 3 19:41 web.conf
I don't know exactly why KVM host folder sharing behaves like this.
Therefore I use the minikube mount command instead (see host_folder_mount.md), and then it works as expected.

Mongodb container's data becomes "read-only" after restarting kubernetes, with glusterfs as storage?

My mongo runs as a Docker container on Kubernetes, with GlusterFS providing the persistent volume. After I restart Kubernetes (the machine powers off and restarts), none of the mongo pods can come back; their logs:
chown: changing ownership of `/data/db/user_management.ns': Read-only file system
chown: changing ownership of `/data/db/storage.bson': Read-only file system
chown: changing ownership of `/data/db/local.ns': Read-only file system
chown: changing ownership of `/data/db/mongod.lock': Read-only file system
Here /data/db/ is the mounted gluster volume, and I can confirm it is mounted in rw mode:
# kubectl get pod mongoxxx -o yaml
apiVersion: v1
kind: Pod
spec:
  containers:
    - image: mongo:3.0.5
      imagePullPolicy: IfNotPresent
      name: mongo
      ports:
        - containerPort: 27017
          protocol: TCP
      volumeMounts:
        - mountPath: /data/db
          name: mongo-storage
  volumes:
    - name: mongo-storage
      persistentVolumeClaim:
        claimName: auth-mongo-data
# kubectl describe pod mongoxxx
...
Volume Mounts:
  /data/db from mongo-storage (rw)
  /var/run/secrets/kubernetes.io/serviceaccount from default-token-wdrfp (ro)
Environment Variables: <none>
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  mongo-storage:
    Type:      PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName: auth-mongo-data
    ReadOnly:  false
...
# kubectl get pv xxx
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/bound-by-controller: "yes"
  name: auth-mongo-data
  resourceVersion: "215201"
  selfLink: /api/v1/persistentvolumes/auth-mongo-data
  uid: fb74a4b9-e0a3-11e6-b0d1-5254003b48ea
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 4Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: auth-mongo-data
    namespace: default
  glusterfs:
    endpoints: glusterfs-cluster
    path: infra-auth-mongo
  persistentVolumeReclaimPolicy: Retain
status:
  phase: Bound
And when I ls on the kubernetes node:
# ls -ls /var/lib/kubelet/pods/fc6c9ef3-e0a3-11e6-b0d1-5254003b48ea/volumes/kubernetes.io~glusterfs/auth-mongo-data/
total 163849
4 drwxr-xr-x. 2 mongo input 4096 1月 22 21:18 journal
65536 -rw-------. 1 mongo input 67108864 1月 22 21:16 local.0
16384 -rw-------. 1 mongo root 16777216 1月 23 17:15 local.ns
1 -rwxr-xr-x. 1 mongo root 2 1月 23 17:15 mongod.lock
1 -rw-r--r--. 1 mongo root 69 1月 23 17:15 storage.bson
4 drwxr-xr-x. 2 mongo input 4096 1月 22 21:18 _tmp
65536 -rw-------. 1 mongo input 67108864 1月 22 21:18 user_management.0
16384 -rw-------. 1 mongo root 16777216 1月 23 17:15 user_management.ns
I cannot chown even though the volume is mounted as rw.
My host is CentOs 7.3:
Linux c4v160 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux.
I guess it is because the GlusterFS volume I provided was unclean. The glusterfs volume infra-auth-mongo may contain dirty directories. One solution is to remove this volume and create another.
Another solution is to hack the mongodb Dockerfile, forcing it to change the ownership of /data/db before starting the mongod process, like this: https://github.com/harryge00/mongo/commit/143bfc317e431692010f09b5c0d1f28395d2055b