Running mongorestore failed, which has never happened to me before. This time I am restoring onto the primary of a two-node replica set:
bash-4.1$ mongorestore mas
connected to: 127.0.0.1
2015-01-16T02:01:50.692-0500 mas/app.bson
2015-01-16T02:01:50.692-0500 going into namespace [mas.app]
1 objects found
2015-01-16T02:01:50.692-0500 Creating index: { key: { _id: 1 }, name: "_id_", ns: "mas.app" }
Error creating index mas.app: 18825 err: "couldn't create file /mongoDB-data/mas.ns"
Aborted
The result of df -h shows there is still plenty of free space:
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 50G 3.2G 44G 7% /
tmpfs 3.6G 0 3.6G 0% /dev/shm
/dev/xvdb 50G 4.4G 43G 10% /mongoDB-data
The original files in /mongoDB-data:
bash-4.1$ ls -l /mongoDB-data/
total 4356132
drwxr-xr-x. 2 mongod mongod 4096 Jan 16 01:42 journal
-rw-------. 1 mongod mongod 67108864 Jan 16 01:42 local.0
-rw-------. 1 mongod mongod 2146435072 Sep 4 20:06 local.1
-rw-------. 1 mongod mongod 2146435072 Sep 4 20:06 local.2
-rw-------. 1 mongod mongod 16777216 Jan 16 01:42 local.ns
drwx------. 2 root root 16384 Sep 17 01:49 lost+found
-rwxr-xr-x. 1 mongod mongod 6 Jan 16 01:42 mongod.lock
-rw-------. 1 mongod mongod 67108864 Sep 4 20:06 test.0
-rw-------. 1 mongod mongod 16777216 Sep 4 20:06 test.ns
Related
I want to know why my mongo docker container is not writing data to a local folder. I run my mongo container with this command
(/data/db seems to be the mongo container's data storage path, and /data/docker/mongo_volume is the folder on the host machine):
sudo docker run -it -v /data/db:/data/docker/mongo_volume -d mongo
When the mongo container successfully started on my host, $ docker ps looks good:
ubuntu@VM-0-9-ubuntu:/data/docker/mongo_volume$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5848da7562a3 mongo "docker-entrypoint.s…" 10 minutes ago Up 10 minutes 0.0.0.0:27017->27017/tcp, :::27017->27017/tcp sleepy_clarke
and $ docker inspect <container_id> shows the mounted volume:
"Mounts": [
    {
        "Type": "bind",
        "Source": "/data/db",
        "Destination": "/data/docker/mongo_volume",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    },
and when I check the container's /data/db folder (in a docker shell), everything looks good:
ls -al
total 716
drwxr-xr-x 4 mongodb mongodb 4096 Mar 25 00:38 .
drwxr-xr-x 1 root root 4096 Mar 25 00:28 ..
-rw------- 1 mongodb mongodb 50 Mar 25 00:28 WiredTiger
-rw------- 1 mongodb mongodb 21 Mar 25 00:28 WiredTiger.lock
-rw------- 1 mongodb mongodb 1472 Mar 25 00:38 WiredTiger.turtle
-rw------- 1 mongodb mongodb 94208 Mar 25 00:38 WiredTiger.wt
-rw------- 1 mongodb mongodb 4096 Mar 25 00:28 WiredTigerHS.wt
-rw------- 1 mongodb mongodb 36864 Mar 25 00:34 _mdb_catalog.wt
-rw------- 1 mongodb mongodb 20480 Mar 25 00:29 collection-0--6476415430291015248.wt
-rw------- 1 mongodb mongodb 65536 Mar 25 00:34 collection-11--6476415430291015248.wt
-rw------- 1 mongodb mongodb 20480 Mar 25 00:29 collection-2--6476415430291015248.wt
-rw------- 1 mongodb mongodb 4096 Mar 25 00:28 collection-4--6476415430291015248.wt
-rw------- 1 mongodb mongodb 20480 Mar 25 00:33 collection-7--6476415430291015248.wt
-rw------- 1 mongodb mongodb 225280 Mar 25 00:33 collection-9--6476415430291015248.wt
drwx------ 2 mongodb mongodb 4096 Mar 25 00:39 diagnostic.data
-rw------- 1 mongodb mongodb 20480 Mar 25 00:29 index-1--6476415430291015248.wt
-rw------- 1 mongodb mongodb 73728 Mar 25 00:33 index-10--6476415430291015248.wt
-rw------- 1 mongodb mongodb 20480 Mar 25 00:34 index-12--6476415430291015248.wt
-rw------- 1 mongodb mongodb 20480 Mar 25 00:29 index-3--6476415430291015248.wt
-rw------- 1 mongodb mongodb 4096 Mar 25 00:28 index-5--6476415430291015248.wt
-rw------- 1 mongodb mongodb 4096 Mar 25 00:28 index-6--6476415430291015248.wt
-rw------- 1 mongodb mongodb 20480 Mar 25 00:33 index-8--6476415430291015248.wt
drwx------ 2 mongodb mongodb 4096 Mar 25 00:28 journal
-rw-r--r-- 1 root root 0 Mar 25 00:29 lueluelue
-rw------- 1 mongodb mongodb 2 Mar 25 00:28 mongod.lock
-rw------- 1 mongodb mongodb 36864 Mar 25 00:35 sizeStorer.wt
-rw------- 1 mongodb mongodb 114 Mar 25 00:28 storage.bson
However, here comes the problem: I found there's nothing in my host machine's /data/docker/mongo_volume:
ubuntu@VM-0-9-ubuntu:/data/docker/mongo_volume$ ll
total 8
drwxr-xr-x 2 root root 4096 Mar 20 13:46 ./
drwxr-xr-x 3 root root 4096 Mar 20 13:46 ../
So could anyone give me a clue? Thanks a lot!
Your docker command is incorrect: the syntax is -v <host_folder>:<container_folder>, with the host path first, e.g.
sudo docker run -it -v /data/docker/mongo_volume:/data/db -d mongo
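As a quick sanity check of the argument order, here is a minimal shell sketch that assembles the corrected command from the two paths in the question (host side first):

```shell
# -v takes <host_path>:<container_path>, host side first.
host_dir=/data/docker/mongo_volume   # folder on the host machine
container_dir=/data/db               # MongoDB's data directory inside the container
cmd="docker run -it -v ${host_dir}:${container_dir} -d mongo"
echo "$cmd"
```

With this order, docker inspect will report the host folder as "Source" and /data/db as "Destination", which is the opposite of what the inspect output in the question shows.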
I don't understand the 40, 41, 42, 43 WAL segment files in my pg_wal directory.
According to the PostgreSQL documentation, segment files are given ever-increasing numbers as names, starting at 000000010000000000000001.
Why do the 4x segment files not appear in my backup? And why does 3F come after 43?
# ls -clt /var/lib/pgsql/13/data/pg_wal/
total 81924
-rw-------. 1 postgres postgres 16777216 jan 29 10.45 00000001000000000000003F
-rw-------. 1 postgres postgres 16777216 jan 29 10.18 000000010000000000000043
drwx------. 2 postgres postgres 59 jan 29 10.18 archive_status
-rw-------. 1 postgres postgres 16777216 jan 29 09.18 000000010000000000000042
-rw-------. 1 postgres postgres 16777216 jan 29 08.18 000000010000000000000041
-rw-------. 1 postgres postgres 16777216 jan 29 07.43 000000010000000000000040
-rw-------. 1 postgres postgres 340 jan 27 19.13 000000010000000000000020.00000060.backup
My backup server wasn't available between 0:13 and 7:29.
My backup directory:
# ls -clt /home/pgbackup/wal/
total 524292
-rw------- 1 pgbackup pgbackup 16777216 jan 29 10:13 00000001000000000000003E
-rw------- 1 pgbackup pgbackup 16777216 jan 29 09:13 00000001000000000000003D
-rw------- 1 pgbackup pgbackup 16777216 jan 29 08:13 00000001000000000000003C
-rw------- 1 pgbackup pgbackup 16777216 jan 29 07:29 00000001000000000000003B
-rw------- 1 pgbackup pgbackup 16777216 jan 29 07:29 00000001000000000000003A
-rw------- 1 pgbackup pgbackup 16777216 jan 29 07:29 000000010000000000000039
-rw------- 1 pgbackup pgbackup 16777216 jan 29 00:13 000000010000000000000038
-rw------- 1 pgbackup pgbackup 16777216 jan 28 23:13 000000010000000000000037
-rw------- 1 pgbackup pgbackup 16777216 jan 28 22:13 000000010000000000000036
-rw------- 1 pgbackup pgbackup 16777216 jan 28 21:14 000000010000000000000035
-rw------- 1 pgbackup pgbackup 16777216 jan 28 20:13 000000010000000000000034
-rw------- 1 pgbackup pgbackup 16777216 jan 28 19:13 000000010000000000000033
-rw------- 1 pgbackup pgbackup 16777216 jan 28 18:13 000000010000000000000032
-rw------- 1 pgbackup pgbackup 16777216 jan 28 17:13 000000010000000000000031
-rw------- 1 pgbackup pgbackup 16777216 jan 28 16:13 000000010000000000000030
-rw------- 1 pgbackup pgbackup 16777216 jan 28 15:13 00000001000000000000002F
-rw------- 1 pgbackup pgbackup 16777216 jan 28 14:30 00000001000000000000002E
-rw------- 1 pgbackup pgbackup 16777216 jan 28 14:30 00000001000000000000002D
-rw------- 1 pgbackup pgbackup 16777216 jan 28 14:30 00000001000000000000002C
-rw------- 1 pgbackup pgbackup 16777216 jan 28 14:30 00000001000000000000002B
-rw------- 1 pgbackup pgbackup 16777216 jan 28 10:13 00000001000000000000002A
-rw------- 1 pgbackup pgbackup 16777216 jan 28 09:13 000000010000000000000029
-rw------- 1 pgbackup pgbackup 16777216 jan 28 08:13 000000010000000000000028
-rw------- 1 pgbackup pgbackup 16777216 jan 28 07:13 000000010000000000000027
-rw------- 1 pgbackup pgbackup 16777216 jan 28 06:46 000000010000000000000026
-rw------- 1 pgbackup pgbackup 16777216 jan 28 06:46 000000010000000000000025
-rw------- 1 pgbackup pgbackup 16777216 jan 28 06:46 000000010000000000000024
-rw------- 1 pgbackup pgbackup 16777216 jan 27 22:13 000000010000000000000023
-rw------- 1 pgbackup pgbackup 16777216 jan 27 21:13 000000010000000000000022
-rw------- 1 pgbackup pgbackup 16777216 jan 27 20:13 000000010000000000000021
-rw------- 1 pgbackup pgbackup 340 jan 27 19:13 000000010000000000000020.00000060.backup
The life cycle of WAL segments is:
They are created ahead of time, so that there are always some in stock to cope with activity spikes. This process is driven by database activity and min_wal_size.
At some point, the WAL segment becomes active and is written to.
When the segment is full or something else happens that causes a WAL switch, the next segment becomes active. If archiving is configured, the old WAL segment is archived.
Once the segment is archived and no longer needed for anything else (wal_keep_size, replication slot), it is removed at the next checkpoint. This can happen in two ways:
the old segment is deleted
the old segment is renamed and enters the cycle again at step 1
This is determined by max_wal_size, min_wal_size and database activity.
In your case, 00000001000000000000003F is the currently active WAL segment (it has the latest modification timestamp and a low number). 000000010000000000000040 to 000000010000000000000043 are the reserve for the future. 000000010000000000000021 to 00000001000000000000003E have been completed, archived and removed.
Never look at the timestamp to determine the order of WAL segments; it is all in the name.
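Because the names are fixed-width hexadecimal, they sort correctly as plain strings; a small sketch using the file names from the question shows that a lexicographic sort gives the true WAL order regardless of modification time:

```shell
# Sorting WAL segment names lexicographically yields their logical order,
# even though 3F has a later mtime than 43 (3F is the active segment,
# 40-43 are recycled segments waiting for future use).
sorted=$(printf '%s\n' \
    000000010000000000000043 \
    00000001000000000000003F \
    000000010000000000000040 | sort)
echo "$sorted"
```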
This is documented:
The system physically divides this sequence into WAL segment files, which are normally 16MB apiece (although the segment size can be altered during initdb). The segment files are given numeric names that reflect their position in the abstract WAL sequence. When not using WAL archiving, the system normally creates just a few segment files and then “recycles” them by renaming no-longer-needed segment files to higher segment numbers. It's assumed that segment files whose contents precede the last checkpoint are no longer of interest and can be recycled.
The parameters are also documented. Beyond a certain level of detail, resort to the (well-documented, open) source code and its README files.
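To illustrate how a name encodes the position: the 24 hex digits split into a timeline ID, a log file number, and a segment number. A bash sketch (assuming the default 16MB segment size, where the segment field runs 00 to FF before the log field increments):

```shell
# Split a WAL segment name into its fields and compute the next name.
seg=00000001000000000000003F
tli=${seg:0:8}   # timeline ID        (00000001)
log=${seg:8:8}   # log file number    (00000000)
pos=${seg:16:8}  # segment in the log (0000003F)
next_pos=$(( (16#$pos + 1) % 256 ))          # wraps at 0x100 for 16MB segments
next_log=$(( 16#$log + (16#$pos + 1) / 256 ))
next=$(printf '%s%08X%08X' "$tli" "$next_log" "$next_pos")
echo "$next"
```

So the segment after 00000001000000000000003F is 000000010000000000000040, exactly the "reserve" file seen in pg_wal above.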
I just upgraded from MongoDB 3.4 to 4.4, and now the database won't start.
As a service:
[root@ssdnodes-54313 mongo]# systemctl restart mongod
Job for mongod.service failed because the control process exited with error code.
See "systemctl status mongod.service" and "journalctl -xe" for details.
-- Unit mongod.service has begun starting up.
Dec 09 15:30:22 ssdnodes-54313 mongod[217641]: about to fork child process, waiting until server is ready for connections.
Dec 09 15:30:22 ssdnodes-54313 mongod[217641]: forked process: 217643
Dec 09 15:30:22 ssdnodes-54313 mongod[217641]: ERROR: child process failed, exited with 1
Dec 09 15:30:22 ssdnodes-54313 mongod[217641]: To see additional information in this output, start without the "--fork" option.
Dec 09 15:30:22 ssdnodes-54313 systemd[1]: mongod.service: Control process exited, code=exited status=1
Dec 09 15:30:22 ssdnodes-54313 systemd[1]: mongod.service: Failed with result 'exit-code'.
Dec 09 15:30:22 ssdnodes-54313 systemd[1]: Failed to start MongoDB Database Server.
Executing mongod directly:
[root@host mongo]# mongod --fork --logpath /var/log/mongodb/mongod.log
about to fork child process, waiting until server is ready for connections.
forked process: 217394
ERROR: child process failed, exited with 100
To see additional information in this output, start without the "--fork" option.
Executing without --fork (it does nothing: no error, and no server listening):
[root@host mongo]# mongod --logpath /var/log/mongodb/mongod.log
[root@host mongo]#
If I start it as:
[root@host mongo]# mongod --dbpath /var/lib/mongo
{"t":{"$date":"2020-12-09T15:28:43.674+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2020-12-09T15:28:43.687+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2020-12-09T15:28:43.691+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2020-12-09T15:28:43.698+00:00"},"s":"I", "c":"STORAGE", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":217522,"port":27017,"dbPath":"/var/lib/mongo","architecture":"64-bit","host":"ssdnodes-54313"}}
That works! But how do I fork it to run in the background and start it as a service at boot time?
Edit:
my /etc/mongod.conf
[root@ssdnodes-54313 ~]# cat /etc/mongod.conf
# mongod.conf

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
#  engine:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
File permissions:
[root@ssdnodes-54313 lib]# ls -la /var/lib
drwxr-xr-x 4 mongod mongod 4096 Dec 9 18:50 mongo
[root@ssdnodes-54313 lib]# ls -la /var/lib/mongo/
total 556
drwxr-xr-x 4 mongod mongod 4096 Dec 9 18:50 .
drwxr-xr-x. 40 root root 4096 Dec 9 14:01 ..
-rw------- 1 mongod mongod 32768 Dec 9 15:32 collection-0--1818581548198400736.wt
-rw------- 1 mongod mongod 36864 Dec 9 15:33 collection-0--4356046170403439820.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:34 collection-0-7358854442417001382.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:34 collection-10-7358854442417001382.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:34 collection-12-7358854442417001382.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:35 collection-14-7358854442417001382.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:35 collection-16-7358854442417001382.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:35 collection-18-7358854442417001382.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:35 collection-20-7358854442417001382.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:35 collection-22-7358854442417001382.wt
-rw------- 1 mongod mongod 24576 Dec 9 16:23 collection-2--4356046170403439820.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:36 collection-24-7358854442417001382.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:36 collection-26-7358854442417001382.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:34 collection-2-7358854442417001382.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:34 collection-4-7358854442417001382.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:34 collection-6-7358854442417001382.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:34 collection-8-7358854442417001382.wt
drwx------ 2 mongod mongod 4096 Dec 9 18:50 diagnostic.data
-rw------- 1 mongod mongod 4096 Dec 9 15:34 index-11-7358854442417001382.wt
-rw------- 1 mongod mongod 32768 Dec 9 15:32 index-1--1818581548198400736.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:34 index-13-7358854442417001382.wt
-rw------- 1 mongod mongod 36864 Dec 9 15:33 index-1--4356046170403439820.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:35 index-15-7358854442417001382.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:34 index-1-7358854442417001382.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:35 index-17-7358854442417001382.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:35 index-19-7358854442417001382.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:35 index-21-7358854442417001382.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:35 index-23-7358854442417001382.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:36 index-25-7358854442417001382.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:36 index-27-7358854442417001382.wt
-rw------- 1 mongod mongod 12288 Dec 9 16:24 index-3--4356046170403439820.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:34 index-3-7358854442417001382.wt
-rw------- 1 mongod mongod 12288 Dec 9 18:50 index-4--4356046170403439820.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:34 index-5-7358854442417001382.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:34 index-7-7358854442417001382.wt
-rw------- 1 mongod mongod 4096 Dec 9 15:34 index-9-7358854442417001382.wt
drwx------ 2 mongod mongod 4096 Dec 9 15:32 journal
-rw------- 1 mongod mongod 36864 Dec 9 15:36 _mdb_catalog.wt
-rw------- 1 mongod mongod 7 Dec 9 15:32 mongod.lock
-rw------- 1 mongod mongod 36864 Dec 9 16:23 sizeStorer.wt
-rw------- 1 mongod mongod 114 Dec 9 15:17 storage.bson
-rw------- 1 mongod mongod 47 Dec 9 15:17 WiredTiger
-rw------- 1 mongod mongod 4096 Dec 9 15:32 WiredTigerHS.wt
-rw------- 1 mongod mongod 21 Dec 9 15:17 WiredTiger.lock
-rw------- 1 mongod mongod 1256 Dec 9 18:50 WiredTiger.turtle
-rw------- 1 mongod mongod 143360 Dec 9 18:50 WiredTiger.wt
SELinux is disabled...
[root@ssdnodes-54313 log]# sestatus
SELinux status: disabled
I am desperately trying to get a Docker project I have inherited up and running, and Docker is giving me no end of problems. When trying to start up my containers I get the following error on my Postgresql container:
FATAL: "/var/lib/postgresql/data" is not a valid data directory
DETAIL: File "/var/lib/postgresql/data/PG_VERSION" does not contain valid data.
HINT: You might need to initdb.
The project is a Rails project using Redis, ElasticSearch, and Sidekiq containers as well - those all load fine.
docker-compose.yml:
postgres:
  image: postgres:9.6.2
  environment:
    POSTGRES_USER: $PG_USER
    POSTGRES_PASSWORD: $PG_PASS
  ports:
    - '5432:5432'
  volumes:
    - postgres:/var/lib/postgresql/data
/var/lib/postgresql/data is owned by the postgres user (as it should be I believe) and the postgresql service starts up and runs fine on its own.
I have tried running initdb from the /usr/lib/postgresql/9.6/bin directory, as well as from Docker (from Docker it doesn't seem to persist or even create anything; if anyone knows why, I would be interested in knowing).
The contents of the /var/lib/postgresql/data directory:
drwxrwxrwx 19 postgres postgres 4096 Jun 28 20:41 .
drwxr-xr-x 5 postgres postgres 4096 Jun 28 20:41 ..
drwx------ 5 postgres postgres 4096 Jun 28 20:41 base
drwx------ 2 postgres postgres 4096 Jun 28 20:41 global
drwx------ 2 postgres postgres 4096 Jun 28 20:41 pg_clog
drwx------ 2 postgres postgres 4096 Jun 28 20:41 pg_commit_ts
drwx------ 2 postgres postgres 4096 Jun 28 20:41 pg_dynshmem
-rw------- 1 postgres postgres 4468 Jun 28 20:41 pg_hba.conf
-rw------- 1 postgres postgres 1636 Jun 28 20:41 pg_ident.conf
drwx------ 4 postgres postgres 4096 Jun 28 20:41 pg_logical
drwx------ 4 postgres postgres 4096 Jun 28 20:41 pg_multixact
drwx------ 2 postgres postgres 4096 Jun 28 20:41 pg_notify
drwx------ 2 postgres postgres 4096 Jun 28 20:41 pg_replslot
drwx------ 2 postgres postgres 4096 Jun 28 20:41 pg_serial
drwx------ 2 postgres postgres 4096 Jun 28 20:41 pg_snapshots
drwx------ 2 postgres postgres 4096 Jun 28 20:41 pg_stat
drwx------ 2 postgres postgres 4096 Jun 28 20:41 pg_stat_tmp
drwx------ 2 postgres postgres 4096 Jun 28 20:41 pg_subtrans
drwx------ 2 postgres postgres 4096 Jun 28 20:41 pg_tblspc
drwx------ 2 postgres postgres 4096 Jun 28 20:41 pg_twophase
-rw------- 1 postgres postgres 4 Jun 28 20:41 PG_VERSION
drwx------ 3 postgres postgres 4096 Jun 28 20:41 pg_xlog
-rw------- 1 postgres postgres 88 Jun 28 20:41 postgresql.auto.conf
-rw------- 1 postgres postgres 22267 Jun 28 20:41 postgresql.conf
PG_VERSION contains 9.6
Any help is much appreciated.
You're changing the default PostgreSQL data path, so you need to initialize the database. Try this:
volumes:
  - ./init.sql:/docker-entrypoint-initdb.d/init.sql
Here is the init.sql file:
CREATE USER docker;
CREATE DATABASE docker;
GRANT ALL PRIVILEGES ON DATABASE docker TO docker;
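For context, the init.sql mount can sit alongside the existing service definition; a sketch based on the compose file from the question (note that scripts in /docker-entrypoint-initdb.d only run when the data directory is empty on first start):

```yaml
postgres:
  image: postgres:9.6.2
  environment:
    POSTGRES_USER: $PG_USER
    POSTGRES_PASSWORD: $PG_PASS
  ports:
    - '5432:5432'
  volumes:
    - postgres:/var/lib/postgresql/data
    - ./init.sql:/docker-entrypoint-initdb.d/init.sql
```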
I had the same issue; I restarted the Docker daemon and even restarted the machine, but that didn't fix it.
The path /var/lib/postgresql/data was not even on the FS.
Note: docker ps was not showing the postgresql container as running.
Solution (docker-compose):
Stop the postgres container:
docker-compose -f <path_to_my_docker_compose_file.yaml> down postgres
Start the postgres container:
docker-compose -f <path_to_my_docker_compose_file.yaml> up -d postgres
-- That did the trick! --
Solution (docker):
docker stop <postgres_container>
docker start <postgres_container>
Another solution that you can try:
initdb <temporary_volume_folder>
Example:
initdb /tmp/postgres
docker-compose -f <path_to_my_docker_compose_file.yaml> up -d postgres
or
initdb /tmp/postgres
docker start <postgres_container>
Note:
In my case the postgres image is defined in the docker-compose.yaml file, and as can be seen I don't define PGDATA nor PG_VERSION, and the container runs fine.
postgres:
  image: postgres:9.6.14
  container_name: postgres
  environment:
    POSTGRES_USER: 'pg12345678'
    POSTGRES_PASSWORD: 'pg12345678'
  ports:
    - "5432:5432"
So when you have postgres:/var/lib/postgresql/data, it's going to mount /var/lib/postgresql/data to a Docker data volume called postgres. Docker data volumes are all stored together in a location that varies depending on the OS.
Try changing it to ./postgres to have it create a directory called postgres relative to your working directory.
Since the source is changing, it will recreate the database, and I'd be willing to bet that fixes the error you're seeing. If not, it could be a permissions issue on the host OS.
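A sketch of the bind-mount variant, keeping the data in a directory next to the compose file instead of in the named volume "postgres":

```yaml
postgres:
  image: postgres:9.6.2
  volumes:
    # "./postgres" is a host directory relative to the compose file,
    # replacing the named volume "postgres" used in the question
    - ./postgres:/var/lib/postgresql/data
```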
I have created a simple image based on mongo:latest. My Dockerfile is just:
FROM mongo:3.3
MAINTAINER developer@encode.cz
Now when I run it with docker run my-mongo mongod, I get a "no /data/db" error. But there is clearly RUN mkdir /data/db in the mongo base image. Also, the pure mongo base image works as expected.
Why is this folder not present in my custom image if it is in the base image?
I think there is a problem in the way you are testing, or I am not understanding your question well. I tested the official image:
docker run -d --name mongo mongo:3.3 mongod
docker exec -it mongo bash -c 'ls -la /data/db'
total 192
drwxr-xr-x 4 mongodb mongodb 4096 Oct 28 18:11 .
drwxr-xr-x 4 root root 4096 Oct 21 20:47 ..
-rw-r--r-- 1 mongodb mongodb 46 Oct 28 17:56 WiredTiger
-rw-r--r-- 1 mongodb mongodb 21 Oct 28 17:56 WiredTiger.lock
-rw-r--r-- 1 mongodb mongodb 935 Oct 28 18:11 WiredTiger.turtle
-rw-r--r-- 1 mongodb mongodb 40960 Oct 28 18:11 WiredTiger.wt
-rw-r--r-- 1 mongodb mongodb 4096 Oct 28 17:56 WiredTigerLAS.wt
-rw-r--r-- 1 mongodb mongodb 16384 Oct 28 17:57 _mdb_catalog.wt
-rw-r--r-- 1 mongodb mongodb 16384 Oct 28 17:57 collection-0--3585910680230311914.wt
-rw-r--r-- 1 mongodb mongodb 16384 Oct 28 17:57 collection-2--3585910680230311914.wt
drwxr-xr-x 2 mongodb mongodb 4096 Oct 28 18:11 diagnostic.data
-rw-r--r-- 1 mongodb mongodb 16384 Oct 28 17:57 index-1--3585910680230311914.wt
-rw-r--r-- 1 mongodb mongodb 16384 Oct 28 17:57 index-3--3585910680230311914.wt
-rw-r--r-- 1 mongodb mongodb 16384 Oct 28 17:57 index-4--3585910680230311914.wt
drwxr-xr-x 2 mongodb mongodb 4096 Oct 28 17:56 journal
-rw-r--r-- 1 mongodb mongodb 2 Oct 28 17:56 mongod.lock
-rw-r--r-- 1 mongodb mongodb 16384 Oct 28 17:57 sizeStorer.wt
-rw-r--r-- 1 mongodb mongodb 95 Oct 28 17:56 storage.bson
Then I created a Dockerfile with your two lines and:
docker build -t my-mongo .
docker run -d --name my-mongo my-mongo mongod
docker exec -it my-mongo bash -c 'ls -la /data/db'
total 192
drwxr-xr-x 4 mongodb mongodb 4096 Oct 28 18:12 .
drwxr-xr-x 4 root root 4096 Oct 21 20:47 ..
-rw-r--r-- 1 mongodb mongodb 46 Oct 28 18:06 WiredTiger
-rw-r--r-- 1 mongodb mongodb 21 Oct 28 18:06 WiredTiger.lock
-rw-r--r-- 1 mongodb mongodb 932 Oct 28 18:12 WiredTiger.turtle
-rw-r--r-- 1 mongodb mongodb 40960 Oct 28 18:12 WiredTiger.wt
-rw-r--r-- 1 mongodb mongodb 4096 Oct 28 18:06 WiredTigerLAS.wt
-rw-r--r-- 1 mongodb mongodb 16384 Oct 28 18:07 _mdb_catalog.wt
-rw-r--r-- 1 mongodb mongodb 16384 Oct 28 18:07 collection-0-683121925029568227.wt
-rw-r--r-- 1 mongodb mongodb 16384 Oct 28 18:07 collection-2-683121925029568227.wt
drwxr-xr-x 2 mongodb mongodb 4096 Oct 28 18:13 diagnostic.data
-rw-r--r-- 1 mongodb mongodb 16384 Oct 28 18:07 index-1-683121925029568227.wt
-rw-r--r-- 1 mongodb mongodb 16384 Oct 28 18:07 index-3-683121925029568227.wt
-rw-r--r-- 1 mongodb mongodb 16384 Oct 28 18:07 index-4-683121925029568227.wt
drwxr-xr-x 2 mongodb mongodb 4096 Oct 28 18:06 journal
-rw-r--r-- 1 mongodb mongodb 2 Oct 28 18:06 mongod.lock
-rw-r--r-- 1 mongodb mongodb 16384 Oct 28 18:07 sizeStorer.wt
-rw-r--r-- 1 mongodb mongodb 95 Oct 28 18:06 storage.bson
Be aware that the /data/db directory is declared as a volume. If you are having problems with that, restart the Docker daemon and check your available disk space with df -h.
Regards