I've created the persistent volume (EBS, 10G) and the corresponding persistent volume claim first. But when I try to deploy the PostgreSQL pod as below (YAML file):
test-postgresql.yaml
I receive this error from the pod:
initdb: directory "/var/lib/postgresql/data" exists but is not empty
It contains a lost+found directory, perhaps due to it being a mount point.
Using a mount point directly as the data directory is not recommended.
Create a subdirectory under the mount point.
Why can't the pod use this path? I tried the same test on minikube and didn't hit any problem.
I tried changing the volume mount path to "/var/lib/test/data", and the pod could run. I created a new table with some data in it, and then killed this pod. Kubernetes created a new pod, but the new one didn't preserve the previous data and table.
So what's the correct way to mount a PostgreSQL volume using AWS EBS in Kubernetes, so that recreated pods can reuse the initial database stored on EBS?
So what's the correct way to mount a PostgreSQL volume using AWS EBS
You are on the right path...
The error you get is because you want to use the root folder of the mounted volume as the PostgreSQL data dir, and PostgreSQL complains that it is not best practice to do so, since the folder is not empty and already contains some data (namely the lost+found directory).
It is far better to locate the data dir in a separate, empty subfolder (/postgres for example) and give PostgreSQL a clean slate when creating its file structure. You didn't get the same issue on minikube since you most probably mounted a host folder that was empty and therefore didn't trigger such a complaint.
To do so, you need an initially empty subPath of your volume (an empty /postgres subfolder on your PV, for example) mounted to the appropriate mount point (/var/lib/postgresql/data) in your pod. Note that the subPath and the mount point's final folder can have the same name; they differ here just as an example, where the test-db-volume/postgres folder would be mounted in the pod to the /var/lib/postgresql/data folder:
...
volumeMounts:
  - mountPath: /var/lib/postgresql/data
    name: test-db-volume
    subPath: postgres
...
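For context, here is a minimal sketch of how that mount could be wired into a Deployment backed by the EBS claim. The Deployment name, image tag, and the claim name test-db-claim are assumptions, not taken from the question:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-postgresql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-postgresql
  template:
    metadata:
      labels:
        app: test-postgresql
    spec:
      containers:
        - name: postgres
          image: postgres:9.6
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: test-db-volume
              subPath: postgres          # initially empty subfolder on the EBS volume
      volumes:
        - name: test-db-volume
          persistentVolumeClaim:
            claimName: test-db-claim     # assumed name of the PVC bound to the EBS PV

Because the data directory now lives in the postgres subfolder of the EBS volume, a recreated pod that binds the same claim will see the previous database files.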
I fixed this by telling Postgres where I want the database to be created, with the PGDATA env variable. It creates the empty directory and inits the DB. If you don't have this, then it assumes you want to create it in the root mount directory, which for me had the lost+found directory that Postgres did not like.
containers:
  - name: postgres
    imagePullPolicy: Always
    image: postgres:9.6
    ports:
      - containerPort: 5432
        name: postgres
    env:
      - name: POSTGRES_DB
        value: "mydb"
      - name: PGDATA
        value: /var/lib/postgresql/data/pgdata
    volumeMounts:
      - mountPath: /var/lib/postgresql/data
        name: postgres-data
This is from the Docker Hub description...
PGDATA This optional variable can be used to define another location -
like a subdirectory - for the database files. The default is
/var/lib/postgresql/data, but if the data volume you're using is a
filesystem mountpoint (like with GCE persistent disks), Postgres
initdb recommends a subdirectory (for example
/var/lib/postgresql/data/pgdata ) be created to contain the data.
So, create one more, deeper directory and it works.
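To make the snippet above complete, the postgres-data mount still needs a matching volumes: entry in the same pod spec. A minimal sketch, where the claim name postgres-data-claim is an assumption:

volumes:
  - name: postgres-data
    persistentVolumeClaim:
      claimName: postgres-data-claim   # assumed PVC name; must match your existing claim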
Use the subPath as described below:
- name: postgredb
  mountPath: /var/lib/postgresql/data
  # setting subPath - it can be any arbitrary name
  subPath: postgres
Related
I have a container with an emptyDir volume:
volumes:
  - emptyDir: {}
    name: someName
I would like to copy all data to my machine using kubectl cp.
I do not know where the someName volume is located. How can I find out and how can I copy the data from the volume to my local machine?
You have to check in your pod spec where the volume is mounted. Look in the containers section for a mount with the name someName, e.g.:
containers:
  - volumeMounts:
      - name: someName
        mountPath: "/mnt/path"
So you know that the emptyDir is mounted at the given mountPath.
Afterwards you can copy the files via
kubectl cp my-namespace/my-pod:/mnt/path /tmp/local/path
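For reference, a minimal pod sketch that puts the two pieces together, with the emptyDir named someName mounted at /mnt/path; the pod name, namespace, image, and command are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: my-namespace
spec:
  containers:
    - name: app
      image: busybox:latest
      command: ["sleep", "3600"]   # keep the pod running so files can be copied out
      volumeMounts:
        - name: someName
          mountPath: /mnt/path
  volumes:
    - name: someName
      emptyDir: {}

With that in place, the kubectl cp command above copies everything under /mnt/path to /tmp/local/path on your machine.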
I am running a Docker container of MongoDB, and noticed a volume created and mounted to /data/configdb. This is in addition to another volume mapped to /data/db, which I know is where my actual data is stored.
I am trying to find out what is stored in /data/configdb, and searched Google for it. Surprisingly enough, I didn't find anything explaining what is stored there.
What is stored there (/data/configdb), and can it be discarded every time I restart my MongoDB container?
To summarize from the docs, config servers store the metadata for a sharded cluster, and /data/configdb is the default path where a config server stores its data files. So if you're not dealing with sharded clusters, removing any references to it should be ok.
From the docs:
--configsvr
Declares that this mongod instance serves as the config server of a sharded cluster. When running with this option, clients (i.e. other cluster components) will not be able to write data to any database other than config and admin. The default port for a mongod with this option is 27019, and the default --dbpath directory is /data/configdb, unless specified.
References:
https://docs.mongodb.com/manual/core/sharded-cluster-config-servers/
https://docs.mongodb.com/v3.4/reference/program/mongod/#cmdoption-configsvr
Hope this helps!
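As a concrete illustration of that advice, a minimal docker-compose sketch that persists only /data/db and leaves /data/configdb alone; the service and volume names are placeholders:

version: '2'
services:
  mongo:
    image: mongo
    volumes:
      - mongo-data:/data/db    # only the actual database files need to persist
volumes:
  mongo-data: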
I have deployed the Docker image of MongoDB (https://hub.docker.com/_/mongo) on OpenShift. It generated out of the box a deployment.yaml file which contains a part with two volumeMounts:
- mountPath: /data/configdb
  name: mongo-5-0-1
- mountPath: /data/db
  name: mongo-5-0-2
When I want to use a single, non-sharded database server and store my data in a volume, this is how this part of the deployment.yaml should look:
volumes:
  - name: mongo-5-0-1
    emptyDir: {}
  - name: mongo-5-0-2
    persistentVolumeClaim:
      claimName: myVolumeName
In the image documentation it's mentioned that:
This image also defines a volume for /data/configdb for use with
--configsvr (see docs.mongodb.com for more details).
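For completeness, the persistentVolumeClaim referenced above has to exist in the same namespace. A minimal claim sketch, where the access mode and the 10Gi size are assumptions; note that Kubernetes object names must be lowercase, so myVolumeName would need to be lowercased on both sides:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myvolumename     # must match the claimName in the deployment
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi      # assumed size; adjust as needed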
Can we use the NFS volume plugin to maintain High Availability and Disaster Recovery across a Kubernetes cluster?
I am running a pod with MongoDB and getting the error
chown: changing ownership of '/data/db': Operation not permitted
Could anybody please suggest how I can resolve this error? (or)
Is there an alternative volume plugin that would be suggested to achieve HA/DR in a Kubernetes cluster?
chown: changing ownership of '/data/db': Operation not permitted
You'll want to either launch the mongo container as root, so that you can chown the directory, or, if the image prohibits it (as some images already have a USER mongo clause that prevents the container from escalating privileges back up to root), then do one of two things: supersede the user with a securityContext stanza in containers:, or use an initContainer: to preemptively change the target folder to be owned by the mongo UID:
Approach #1:
containers:
  - name: mongo
    image: mongo:something
    securityContext:
      runAsUser: 0
(which may require altering your cluster's config to permit such a thing to appear in a PodSpec)
Approach #2 (which is the one I use with Elasticsearch images):
initContainers:
  - name: chmod-er
    image: busybox:latest
    command:
      - /bin/chown
      - -R
      - "1000"          # or whatever the mongo UID is; use the string "1000", not 1000, due to YAML
      - /data/db
    volumeMounts:
      - name: mongo-data   # or whatever
        mountPath: /data/db
containers:
  - name: mongo   # then run your container as before
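Expanding that last line, here is a minimal sketch of how the main container and the pod's volumes: section could look; the image tag and the claim name mongo-data-claim are assumptions:

containers:
  - name: mongo
    image: mongo:4.4.0-bionic
    volumeMounts:
      - name: mongo-data
        mountPath: /data/db
volumes:
  - name: mongo-data
    persistentVolumeClaim:
      claimName: mongo-data-claim   # assumed PVC name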
/data/db is a mountpoint, even if you don't explicitly mount a volume there. The data is persisted to an overlay specific to the pod.
Kubernetes mounts all volumes as 0755 root.root, regardless of what the permissions for the directory were initially.
Of course mongo cannot chown that.
If you mount the volume somewhere below /data/db, you will get the same error.
And if you mount the volume one level up, at /data, the data will not be stored on the NFS, because the mountpoint at /data/db will write to the overlay instead. But you won't get that error anymore.
Adding command: ["mongod"] to your Deployment manifest overrides the default entrypoint script and prevents it from executing the chown.
...
spec:
  containers:
    - name: mongodb
      image: mongo:4.4.0-bionic
      command: ["mongod"]
...
Instead of mounting /data/db, we could mount /data. Internally, mongo will create /data/db. During the entrypoint, mongo tries to chown this directory, but if we mount a volume directory onto this mount point, then as the mongo container user it will not be able to chown. That's the cause of the issue.
Here is a sample of a working mongo deployment yaml:
...
spec:
  containers:
    - name: mongo
      image: mongo:latest
      volumeMounts:
        - mountPath: /data
          name: mongo-db-volume
  volumes:
    - hostPath:
        path: /Users/name/mongo-data
        type: Directory
      name: mongo-db-volume
...
I have a directory within container A that I would like to share with container B.
For example, I have a directory /dataabc on container A.
I've tried using a shared hostPath volume, however, as this is empty when mounted, it makes the existing files inaccessible (/dataabc would be mounted on top of the existing /dataabc/ from container A).
I could copy the files over on container startup, but this requires modification to the container. Is there a simpler way that does not require modifying the container?
Big thanks to @graham. I could reuse the existing container with just this minor modification to the pod config:
initContainers:
  - args:
      - cp -r /var/www / && ls -altr /www/
    command:
      - /bin/sh
      - -c
    image: example
    imagePullPolicy: Always
    name: example-init
    volumeMounts:
      - mountPath: /www
        name: webroot
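For completeness, the container that consumes the shared files mounts the same webroot volume, and the pod declares it as an emptyDir. A minimal sketch, where the container name and image are placeholders:

containers:
  - name: example-app            # the container that consumes the copied files
    image: example
    volumeMounts:
      - mountPath: /www
        name: webroot
volumes:
  - name: webroot
    emptyDir: {}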
I'm looking into how to mount volumes with docker-compose for data persistence but I'm having trouble understanding all the examples I read.
https://www.linux.com/learn/docker-volumes-and-networks-compose
version: '2'
services:
  mysql:
    image: mysql
    container_name: mysql
    volumes:
      - mysql:/var/lib/mysql
  ...
volumes:
  mysql:
OK, so this defines a volume named mysql at the bottom, and it references this volume in:
- mysql:/var/lib/mysql
How will the mysql image know to look in this volume named mysql? Is it just designed to look in all the volumes it has to store data or something?
Then in other examples I see the following:
services:
  nginx:
    image: nginx
    depends_on:
      - ghost
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "80:80"
    networks:
      - proxy
This example doesn't need to define a volume, why is that?
Your MySQL data will be stored in the named volume mysql, which is created by:
volumes:
  mysql:
You can list the Docker volumes using docker volume ls, and the path will be something like /var/lib/docker/volumes/mysql/_data. When you cd into this folder, you will see the same data as in your mysql container at the path /var/lib/mysql. If you exec inside your container, you will see the same data.
How does it know how to use this path?
Well, check the Dockerfile of mysql. Here it is:
VOLUME /var/lib/mysql
In short: all the data of your mysql is stored in /var/lib/mysql inside your container and mounted to your named Docker volume mysql on your host, whose path is something like /var/lib/docker/volumes/mysql/_data/.
The next part is mounting ./default.conf (a relative path on your host) to the path /etc/nginx/conf.d/default.conf inside your nginx container.
Nginx and Ghost don't need a named volume in this case because they don't need to keep specific data. When you create your environment you will add data using Ghost (write blogs), but the data itself will be stored in the mysql database, not in the Ghost container.
Remark (in case your second example has nothing to do with the mysql example): the default image of Ghost works with a sqlite3 db which lives inside the same container (which breaks the one-service-per-container principle, so this is fine for development, not for production). But if you use this setup, you need to create a named volume for the sqlite db which is in the same container as Ghost. Take a look at the Dockerfile of ghost.
If you want to use mysql, you probably need to mount a config file into your Ghost container to tell it to use mysql. You will not need a named Docker volume for Ghost then, because the data won't be stored in the Ghost container but in the mysql container.
To keep your last example persistent without using mysql with a named volume, you have to add a volume for the sqlite db which is inside the Ghost container, at this path: /var/lib/ghost/content. Check the Dockerfile again to see this path.
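A minimal compose sketch of that idea, keeping the stock ghost image and persisting its sqlite data in a named volume; the service and volume names are placeholders:

version: '2'
services:
  ghost:
    image: ghost
    ports:
      - "2368:2368"        # Ghost's default port
    volumes:
      - ghost-content:/var/lib/ghost/content
volumes:
  ghost-content: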
This blog post explains how to set up Ghost with mysql in docker-compose.