Permission error with mongo when running in Docker - mongodb

My docker-compose:
version: "2"
services:
api:
build: .
ports:
- "3007:3007"
links:
- mongo
volumes:
- .:/opt/app
mongo:
image: mongo
volumes:
- /data/db:/data/db
ports:
- "27017:27017"
I get a permission error:
mongo_1 | chown: changing ownership of '/data/db/diagnostic.data/metrics.2017-06-27T13-32-30Z-00000': Operation not permitted
mongo_1 | chown: changing ownership of '/data/db/journal/WiredTigerLog.0000000054': Operation not permitted
mongo_1 | chown: changing ownership of '/data/db/journal/WiredTigerPreplog.0000000001': Operation not permitted
mongo_1 | chown: changing ownership of '/data/db/journal/WiredTigerPreplog.0000000002': Operation not permitted
mongo_1 | chown: changing ownership of '/data/db/WiredTiger.turtle': Operation not permitted
mongo_1 | chown: changing ownership of '/data/db/WiredTigerLAS.wt': Operation not permitted
ls -la on data:
ls -la data
total 0
drwxrwxrwx 3 root wheel 102 Dec 1 2016 .
drwxr-xr-x 35 root wheel 1258 Jun 25 04:29 ..
drwxrwxrwx 118 root wheel 4012 Jun 27 15:33 db
If I manually change the permission of /data/db, it will be changed back.
What is the problem here? There's no problem if I run mongo locally.

I had this issue on CentOS and the solution was to put SELinux into permissive mode:
setenforce 0
It's not a mongo problem; it's actually a Docker problem: when Docker maps the volume, it tries to change the permissions and fails due to user/group/SELinux restrictions.
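If you'd rather keep SELinux enforcing instead of running setenforce 0, a minimal sketch of an alternative (assuming the compose file above) is Docker's volume relabeling flag:
mongo:
  image: mongo
  volumes:
    # ':Z' asks Docker to relabel the host directory with a private SELinux
    # label the container may write to (use ':z' for a label shared between containers)
    - /data/db:/data/db:Z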
UPDATE:
There's a chown command in entrypoint.sh that tries to change the ownership of the directories and files in the mapped volume. Read more here.

Only root (or a user escalating via sudo) can change the ownership of a file or directory. When you run mongodb in Docker and attach a volume from the host, mongo tries to run as the mongod user. Since that user doesn't exist on your host, and root owns the volume mongod is trying to take ownership of, the OS sees the chown as a permissions violation and you get that error. You have a few options:
1. Configure mongo to run as root by editing the mongo config and copying it in during the docker build process (this assumes you're building the image from a Dockerfile). Then it will have no problem accessing the attached volume.
2. Create a mongod user & group on the host and change the ownership of the data directory to that user, so that the OS sees no difference in ownership/permissions (see the sketch after this list).
3. Rearchitect your system so mongo uses the container's default internal storage for its lifetime and forgo the volume mount completely.
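A minimal sketch of option 2, assuming the official mongo image (whose Debian-based variants run as a mongodb user with UID/GID 999; verify the UID for your image before relying on it):
# create a host user/group whose IDs match the user inside the container
# (999 is an assumption taken from the official Debian-based mongo image;
# check with: docker run --rm mongo id mongodb)
sudo groupadd -g 999 mongod
sudo useradd -r -u 999 -g 999 mongod
# hand the host directory to that UID so the entrypoint's chown succeeds (or is a no-op)
sudo chown -R 999:999 /data/db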

Instead of mounting a host directory, you can mount a named volume:
version: "2"
services:
api:
build: .
ports:
- "3007:3007"
links:
- mongo
volumes:
- .:/opt/app
mongo:
image: mongo
volumes:
- mongodata:/data/db
ports:
- "27017:27017"
volumes:
mongodata
This is the issue in mongo: https://github.com/docker-library/mongo/issues/232#issuecomment-355423692
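If you want to see where Docker keeps a named volume on the host, you can inspect it; note that compose prefixes the volume name with the project name, so the exact name below is an assumption:
docker volume ls
# 'Mountpoint' in the output is where the data lives on the Docker host
docker volume inspect <project>_mongodata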

Related

Docker-Compose postgres upgrade initdb: error: directory "/var/lib/postgresql/data" exists but is not empty

I had postgres 11 installed using docker-compose. I wanted to upgrade it to 12, but even though I removed the container and its volume, the container status says "Restarting".
Here is my docker-compose file
version: '3.5'
services:
  postgres:
    image: postgres:12
    environment:
      POSTGRES_HOST_AUTH_METHOD: "trust"
    ports:
      - "5432"
    restart: always
    volumes:
      - /etc/postgresql/12/postgresql.conf:/var/lib/postgresql/data/postgresql.conf
      - db_data:/var/lib/postgresql/data
volumes:
  db_data:
However, it is not working and the logs show the following error:
2020-07-02T12:54:47.012973448Z The files belonging to this database system will be owned by user "postgres".
2020-07-02T12:54:47.013030445Z This user must also own the server process.
2020-07-02T12:54:47.013068962Z
2020-07-02T12:54:47.013222608Z The database cluster will be initialized with locale "en_US.utf8".
2020-07-02T12:54:47.013261425Z The default database encoding has accordingly been set to "UTF8".
2020-07-02T12:54:47.013281815Z The default text search configuration will be set to "english".
2020-07-02T12:54:47.013293326Z
2020-07-02T12:54:47.013303793Z Data page checksums are disabled.
2020-07-02T12:54:47.013313919Z
2020-07-02T12:54:47.013450079Z initdb: error: directory "/var/lib/postgresql/data" exists but is not empty
2020-07-02T12:54:47.013487706Z If you want to create a new database system, either remove or empty
2020-07-02T12:54:47.013501126Z the directory "/var/lib/postgresql/data" or run initdb
2020-07-02T12:54:47.013512379Z with an argument other than "/var/lib/postgresql/data".
How could I remove or empty this /var/lib/postgresql/data when the container is constantly restarting?
Thanks in advance
Quoting @yosifkit from this issue:
The volume needs to be empty or a valid already initialized postgres
database with the file PG_VERSION in there so the init can be
skipped.
... If there are any files or folders in there like lost+found it
will probably fail to initialize. If there are files that you want to
keep in the volume (or have no control over) you could adjust the
PGDATA environment variable to point to a sub-directory in there
like -e PGDATA=/var/lib/postgresql/data/db-files/.
So I added PGDATA to the environment section of the compose file to solve the issue (notice the some_name at the end):
services:
  postgres:
    image: postgres:12
    environment:
      PGDATA: /var/lib/postgresql/data/some_name/
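For context, a minimal sketch of the question's full compose file with only that one change applied (names and paths as in the question):
version: '3.5'
services:
  postgres:
    image: postgres:12
    environment:
      POSTGRES_HOST_AUTH_METHOD: "trust"
      # initdb now targets an empty sub-directory, so pre-existing files in the
      # volume (lost+found, an overlapping postgresql.conf mount) no longer trip it up
      PGDATA: /var/lib/postgresql/data/some_name/
    ports:
      - "5432"
    restart: always
    volumes:
      - /etc/postgresql/12/postgresql.conf:/var/lib/postgresql/data/postgresql.conf
      - db_data:/var/lib/postgresql/data
volumes:
  db_data: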
I got this issue because the /var/lib/postgresql/data/postgresql.conf mount and the /var/lib/postgresql/data mount overlap inside the docker container.
An example of a broken config is:
version: "3.8"
services:
db:
image: "postgres:10"
ports:
- "5432:5432"
volumes:
- ./postgresql.conf:/var/lib/postgresql/data/postgresql.conf
- ./pg-data:/var/lib/postgresql/data
To avoid this, I tell PostgreSQL to find its config in /etc/postgresql.conf instead, so the volumes no longer overlap:
version: "3.8"
services:
db:
image: "postgres:10"
command: ["postgres", "-c", "config_file=/etc/postgresql.conf"]
ports:
- "5432:5432"
volumes:
- ./postgresql.conf:/etc/postgresql.conf
- ./pg-data:/var/lib/postgresql/data
This is similar to what pmsoltani suggests, but I move the location of the postgresql.conf file instead of the data directory.
I had the same issue today; I fixed it by removing the content of the volume (db_data in your case):
docker volume ls
docker volume inspect db_data <-- will show you the mountpoint
I went to the directory containing the mountpoint (the mountpoint being e.g. /path/data), made a backup, and emptied it:
cp -R data data.backup
cd data
rm -R *
And start services:
docker-compose up -d
Another solution is to simply remove the volume attached to the container:
docker-compose down -v

Postgres on Docker in Azure

I'm trying to run a Postgres Docker container in an Azure Web App.
When I try to mount a volume to the data folder, I get the error: FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
My compose script:
version: "3"
services:
db:
image: postgres:11.2
environment:
- POSTGRES_USER=test
- POSTGRES_PASSWORD=test
- POSTGRES_DB=test
volumes:
- ${WEBAPP_STORAGE_HOME}/data:/var/lib/postgresql/data
ports:
- "5433:5432"
The Docker host is set to Linux.
How can I get around this issue?
(If I don't, the data is lost on every restart / container update.)
For this issue, you can take a look at the Dockerfile of the postgres image. There is a step that changes the ownership of the directory /var/lib/postgresql with the command:
chown -R postgres:postgres /var/lib/postgresql
But when you use persistent storage in an Azure Web App, you cannot change its ownership, so this step fails and causes the error:
FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
See more details about the limitation here.

Error starting postgres container - mkdir: Permission denied

I am starting a postgres container using the following docker-compose.yml:
version: '3'
services:
  db:
    image: postgres:latest
    container_name: postgres
    environment:
      POSTGRES_USER: usr
      POSTGRES_PASSWORD: pswd
      POSTGRES_DB: db
      PGDATA: /var/lib/postgresql/data/pgdata
    ports:
      - 5432:5432
    volumes:
      - nfs_cur_dir:/var/lib/postgresql/data
volumes:
  nfs_cur_dir:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=10.15.187.88,rw"
      device: ":/u/uname/home/database"
I am getting the following error when starting the container:
$sudo ./docker-compose up db
Starting postgres ... done
Attaching to postgres
postgres | mkdir: cannot create directory ‘/var/lib/postgresql/data’: Permission denied
postgres exited with code 1
The permissions on the database directory are 777:
drwxrwxrwx 3 uname grpname 4096 May 5 22:57 database
After the failure I also see the pgdata directory created like this:
drwx------ 2 polkitd root 4096 May 5 22:57 pgdata
Note:
The data directory for postgres is mapped to an NFS location, hence I have defined a new NFS volume in the docker-compose file and mapped it to the postgres container.
I am using the PGDATA env variable to define a different location for the data directory.
Other than the above two things there is nothing out of the ordinary. If I use a local drive location for the data directory, this works fine!
You should check the permissions that the NFS share exposes.
According to what you said, it works fine with a local drive; that's why I think the NFS share's permissions aren't behaving as you expect.
Maybe you should create the directory before trying to run your application.
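One thing worth ruling out on the NFS server is root squashing: with the default root_squash export option, root inside the container is mapped to an unprivileged user, so the entrypoint's mkdir/chown steps can fail even on a 777 directory. A hypothetical /etc/exports entry that disables it (the client subnet is an assumption, and no_root_squash has real security implications):
# /etc/exports on the NFS server (path taken from the compose file above)
/u/uname/home/database 10.15.0.0/16(rw,sync,no_root_squash)
# re-export after editing:
# exportfs -ra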

permission denied for mongodb container error while running from docker_compose file

Using Windows 10 Pro.
This is one of the services under my docker_compose.yml file.
version: '3'
networks:
  demo-net:
services:
  mongodb:
    image: mongo:latest
    container_name: mongodb
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: root
      MONGO_INITDB_DATABASE: admin
    ports:
      - 27017:27017
    volumes:
      - ./mongo_data:/data/db
    networks:
      - demo-net
When I run docker-compose up in VS Code, I get this error:
mongodb | 2020-05-07T16:53:34.336+0000 W STORAGE [initandlisten] Failed to start up WiredTiger under any compatibility version.
mongodb | 2020-05-07T16:53:34.337+0000 F STORAGE [initandlisten] Reason: 1: Operation not permitted
mongodb | 2020-05-07T16:53:34.337+0000 F - [initandlisten] Fatal Assertion 28595 at src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp 915
mongodb | 2020-05-07T16:53:34.337+0000 F - [initandlisten]
mongodb |
mongodb | ***aborting after fassert() failure
mongodb |
mongodb |
mongodb exited with code 14
Could anyone tell me what I am doing wrong?
This same piece of code works on a friend's Mac.
Will keeping MongoDB on my local machine as well as using it in the container cause any problems?
EDIT:
Your issue is container / host permissions. This answer might help.
ORIGINAL ANSWER:
Will keeping MongoDB on my local machine as well as using it in the container cause any problems?
Yes, if both run on the same default port (27017). You can avoid this by mapping a different local port, e.g. in your docker compose:
ports:
  - 27018:27017
Then connect to the containerized mongo on port 27018.
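For example, from the host, using the root credentials from the compose file above (this is the legacy mongo shell; newer installs ship mongosh instead):
mongo --port 27018 -u root -p root --authenticationDatabase admin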

chown: changing ownership of '/data/db': Operation not permitted

Can we use the NFS volume plugin to maintain high availability and disaster recovery across a Kubernetes cluster?
I am running a pod with MongoDB and getting the error
chown: changing ownership of '/data/db': Operation not permitted
Could anybody please suggest how to resolve this error, or is there an alternative volume plugin that achieves HA/DR in a Kubernetes cluster?
chown: changing ownership of '/data/db': Operation not permitted
You'll want to either launch the mongo container as root so that you can chown the directory, or, if the image prohibits that (some images already have a USER mongo clause that prevents the container from escalating privileges back up to root), do one of two things: supersede the user with a securityContext stanza in containers:, or use an initContainer: to preemptively chown the target folder to the mongo UID.
Approach #1:
containers:
- name: mongo
  image: mongo:something
  securityContext:
    runAsUser: 0
(which may require altering your cluster's config to permit such a thing to appear in a PodSpec)
Approach #2 (which is the one I use with Elasticsearch images):
initContainers:
- name: chmod-er
  image: busybox:latest
  command:
  - /bin/chown
  - -R
  - "1000"   # or whatever the mongo UID is; use the string "1000", not 1000, due to yaml
  - /data/db
  volumeMounts:
  - name: mongo-data   # or whatever
    mountPath: /data/db
containers:
- name: mongo   # then run your container as before
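A third option, not from the answer above and sketched here as an assumption about your volume type: for volumes the kubelet knows how to relabel (so not hostPath), a pod-level fsGroup makes Kubernetes change group ownership of the volume for you at mount time. 999 is the GID of the mongodb group in the official Debian-based image, so verify it for your image:
spec:
  securityContext:
    fsGroup: 999   # kubelet recursively chgrps supported volumes to this GID
  containers:
  - name: mongo
    image: mongo:latest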
/data/db is a mountpoint, even if you don't explicitly mount a volume there. The data is persisted to an overlay specific to the pod.
Kubernetes mounts all volumes as 0755 root:root, regardless of what the permissions for the directory were initially.
Of course mongo cannot chown that.
If you mount the volume somewhere below /data/db, you will get the same error.
And if you mount the volume above at /data, the data will not be stored on the NFS because the mountpoint at /data/db will write to the overlay instead. But you won't get that error anymore.
Adding command: ["mongod"] to your Deployment manifest overrides the default entrypoint script and prevents the chown from being executed. (Note that bypassing the entrypoint also skips its initialization logic, such as the MONGO_INITDB_* environment variable handling.)
...
spec:
  containers:
  - name: mongodb
    image: mongo:4.4.0-bionic
    command: ["mongod"]
...
Instead of mounting /data/db, we could mount /data. Internally, mongo will create /data/db itself. During the entrypoint, mongo tries to chown this directory, but if we mount a volume directly onto that mount point, then as the mongo container user it will not be able to chown it. That's the cause of the issue.
Here is a sample of a working mongo deployment yaml:
...
spec:
  containers:
  - name: mongo
    image: mongo:latest
    volumeMounts:
    - mountPath: /data
      name: mongo-db-volume
  volumes:
  - hostPath:
      path: /Users/name/mongo-data
      type: Directory
    name: mongo-db-volume
...