initdb: could not change permissions of directory on PostgreSQL container

I am new to the Docker ecosystem and I am trying to spin up a simple Postgres container along with a volume so it persists its data, using a YAML Compose file. The file is as follows:
# Use postgres/example user/password credentials
version: '3.3'
services:
  db:
    image: postgres
    environment:
      POSTGRES_DB: recrow
      POSTGRES_USER: recrow
      POSTGRES_PASSWORD: recrow_db_1000
      PGDATA: /var/lib/pgsql/data/pgdata
    volumes:
      - ./pgsql/data:/var/lib/pgsql/data/pgdata
However, upon calling docker-compose -f stack.yml up I get the following error:
fixing permissions on existing directory /var/lib/postgresql/data/pgdata ... initdb: could not change permissions of directory "/var/lib/postgresql/data/pgdata": Operation not permitted
/var/lib/pgsql/data/pgdata is supposed to be a directory relative to the container's root, while ./pgsql/data is a path on the host. I am running the container from an ntfs-3g partition mounted on /mnt/storage. What could be the problem? I am also running Docker without root permissions, by adding my user to the docker group, and this user also has full access to the aforementioned mount point /mnt/storage.

I'm guessing this is going to be an incompatibility with ntfs-3g. The PostgreSQL image contains an entrypoint script that does some permission changes on container start: https://github.com/docker-library/postgres/blob/972294a377463156c8d61297320c872fc7d370a9/9.6/docker-entrypoint.sh#L32-L38. I found another relevant question at https://askubuntu.com/questions/11840/how-do-i-use-chmod-on-an-ntfs-or-fat32-partition that talks about being able to set permissions at mount time, but not being able to change them via chmod or chown (which is likely the reason for the failure in this case).
Unfortunately, I think the answer here is that you cannot use ntfs-3g safely for backing Docker host volume mounts.
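If you want to confirm that this is what is happening, a rough check (the paths and the postgres UID of 999 are assumptions based on the official image, not something from the original post) is to attempt the same operations the entrypoint performs, directly on the ntfs-3g mount:

mkdir -p /mnt/storage/pgsql/data/pgdata
chmod 700 /mnt/storage/pgsql/data/pgdata              # the entrypoint makes a similar call
sudo chown -R 999:999 /mnt/storage/pgsql/data/pgdata  # 999 is the postgres user in the official Debian-based image
stat -c '%a %u:%g' /mnt/storage/pgsql/data/pgdata     # on ntfs-3g the mode/owner typically won't change

ntfs-3g only lets you fix ownership and permissions at mount time (via the uid, gid and umask mount options), so the chmod/chown from the entrypoint cannot succeed; a named volume or a directory on an ext4-backed path avoids the problem entirely.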

Following up on liam-mitchell's note above, that is the answer. Use named volumes like the following:
services:
  db:
    image: postgres:12-alpine
    volumes:
      - "postgres:/data/postgres"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - PGDATA=/data/postgres
...
volumes:
  postgres:
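To double-check that the data really lands in the named volume, something like this should work (the <project> prefix comes from your Compose project/directory name and is a placeholder here):

docker volume ls                                               # look for <project>_postgres
docker volume inspect <project>_postgres --format '{{ .Mountpoint }}'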

I work with OpenShift and had the same problem running this official image from Docker Hub.
In my case, the solution was to use the official Postgres image from the Red Hat repository; that image has this problem fixed, so it can be an alternative.

I had the same issue with Docker on WSL2. Setting the :Z flag on the mount and mounting to a Linux directory (/home/*) rather than a Windows file system directory (/mnt/*) worked for me.
My compose file:
version: '3.3'
services:
  postgres:
    container_name: dbs2-postgres
    environment:
      - POSTGRES_PASSWORD=mysecretpassword
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - './data:/var/lib/postgresql/data:Z'
    image: postgres

Related

How do I delete Postgres docker volume?

I have a docker-compose.yml which I am using to deploy to a remote host from my local (Mac) machine using docker context. The compose config is as follows:
database:
  image: postgres:14.2
  restart: on-failure
  volumes:
    - ./db-data:/var/lib/postgresql/data
  environment:
    POSTGRES_DB: db
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: postgres
In order to persist data, I have defined a volume ./db-data:/var/lib/postgresql/data. This db-data folder does not exist on my local machine. I want to delete this mount completely because I don't want any of the previously persisted data. I know I can define a new volume directory, but I would like to use the same directory name (db-data). I have tried the following:
docker compose down --volume --remove-orphans - when I recreate the container, the previously persisted data still exists
There is no folder called ./db-data in my Mac working directory.
I tried searching /var/lib/docker on my Mac, but that directory does not exist.
The Docker for Mac app doesn't list any volumes.
There is no db-data on the remote host where the database is deployed.
Running docker inspect <container-id> listed the mount directory for the container. The mount directory resembled an absolute path on my local computer; for example, it was like /Users/<user-name>/dir/db-data. When I saw this I assumed it had to be on my local computer because of the /Users/<user-name> prefix, but the path was actually found in the root of the remote machine.
That's because the directory for Docker volumes is inside the Docker VM on macOS.
See: Where is /var/lib/docker on Mac/OS X
You would have to follow that to see the volume.
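Since the stack in this question was deployed through a docker context, the bind mount actually lives on the remote host rather than inside the local Docker VM. A hedged way to confirm and remove it (the context name remote and the SSH login are placeholders):

docker --context remote inspect <container-id> --format '{{ json .Mounts }}'
# the "Source" shown (e.g. /Users/<user-name>/dir/db-data) is a path on the remote host
docker --context remote compose down
ssh <user>@<remote-host> 'rm -rf /Users/<user-name>/dir/db-data'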

Can't keep postgres data persistent using Github CodeSpaces with Docker-Compose

I set up a Github codespaces environment using devcontainer.json and docker-compose.yaml. Everything works fine, but the postgres database defined in docker-compose.yml loses its data every time the container needs to be re-built.
Here's the bottom part of the docker-compose.yml
  db:
    image: postgres:latest
    restart: unless-stopped
    volumes:
      - postgres-data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: test_user
      POSTGRES_DB: test_db
      POSTGRES_PASSWORD: test_pass
volumes:
  postgres-data:
As you can see, I am trying to map the Postgres data directory into a postgres-data named volume, but this doesn't work for some reason.
What am I doing wrong that's preventing Postgres data from persisting between container builds?
Another option would be to look into using Spawn. (Disclaimer - I'm one of the devs working on it).
We've written some documentation about exactly how to use Spawn-hosted databases with GitHub codespaces here: https://docs.spawn.cc/blog/2021/08/01/spawn-and-codespaces
This will allow you to provision a database that's independent of the GitHub codespace and preserve data between restarts.
You get some extra features with Spawn like arbitrary save points, resets and loading back to saved revisions - but the key functionality of spinning up a database for a GitHub codespace and preserving data is one of the things it works extremely well for.
According to https://docs.github.com/en/codespaces/customizing-your-codespace/configuring-codespaces-for-your-project#dockerfile,
only Docker images can be pulled from source and set up; nowhere do they mention that volume persistence is guaranteed.
And after going through https://code.visualstudio.com/docs/remote/devcontainerjson-reference, it looks like mounts and a few other features related to volumes are not supported in Codespaces:
workspaceMount: Not yet supported in Codespaces or when using Clone Repository in Container Volume.
Workaround:
In the .devcontainer folder where your Dockerfile is present, add a line like this:
RUN curl https://<your_public_cloud>/your_volume.vol -O
Here <your_public_cloud> can be Google Drive, AWS or any endpoint where you have access to download the volume; it is also the volume you need to persist.
Once it is downloaded you can mount the volume to the postgres service or make a hot swap.
When you want to save, just upload the volume to your cloud storage provider.
Repeat the process every time you build, and save and upload before "unbuilding" or dismissing your codespace, whatever you like to call it.
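A more concrete (hypothetical) variant of that workaround is to keep a pg_dump archive in cloud storage instead of a raw volume file; the URL is the same placeholder as above, and the user/database names are taken from the compose file earlier in this question:

# restore.sh - run after the codespace is (re)built
curl -fsSL https://<your_public_cloud>/test_db.dump -o /tmp/test_db.dump
PGPASSWORD=test_pass pg_restore --host db --username test_user --dbname test_db --clean /tmp/test_db.dump

# save.sh - run before rebuilding or dismissing the codespace
PGPASSWORD=test_pass pg_dump --host db --username test_user --format custom test_db > /tmp/test_db.dump
curl -fsSL -T /tmp/test_db.dump https://<your_public_cloud>/test_db.dump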
hope that eases your issue, happy coding!
As long as you don't remove the volume with, for example, docker-compose down --volumes, the data should persist.
I had the same issue, and it turned out that I had a crontab entry running docker system prune -af every 15 minutes!
You could just mount a host directory, instead of using a docker volume:
volumes:
  - /home/me/postgres_data:/var/lib/postgresql/data
This guarantees that no volume cleanup (accidental or deliberate) nukes your database.
Indeed the postgres docs do this in their examples. See the PGDATA environment variable.
As you don't have access to the VM, maybe the directory containing your docker-compose.yml changes between builds.
In that case, the volume name may change too.
Indeed, by default, your volume name would be the following:
<directory_name>_postgres-data
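One quick way to check whether that is happening is to list and inspect the volumes after each build (the myworkspace prefix below is only an example):

docker volume ls
# e.g. local  myworkspace_postgres-data   <- the prefix comes from the directory/project name
docker volume inspect myworkspace_postgres-data --format '{{ .CreatedAt }} {{ .Mountpoint }}'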
Could you try a named volume (supported starting with Compose file format 3.4):
  db:
    image: postgres:latest
    restart: unless-stopped
    volumes:
      - postgres-data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: test_user
      POSTGRES_DB: test_db
      POSTGRES_PASSWORD: test_pass
volumes:
  postgres-data:
    external: false
    name: postgres-data
The docker-compose documentation can be found here:
https://docs.docker.com/compose/compose-file/compose-file-v3/#name
EDIT 1
If your VM is created at each build, the Docker dependencies (volumes, networks, etc.) are recreated too.
A persistent volume is needed somewhere (one that survives VM builds).
You may have to create a directory in your local workspace, like:
/local/workspace/postgres-data/
which, in Codespaces, becomes (according to my understanding):
./postgres-data
Check permissions; your user may not exist in the container.
As a result, your compose file becomes:
  db:
    image: postgres:latest
    restart: unless-stopped
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: test_user
      POSTGRES_DB: test_db
      POSTGRES_PASSWORD: test_pass

How to make sure docker-compose will not remove my volume with postgres data

I am running a simple Django web app with docker-compose. I define both a web service and a db service in a docker-compose.yml file:
version: "3.8"
services:
db:
image: postgres
volumes:
- postgres_data:/var/lib/postgresql/data
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
ports:
- "8000:8000"
env_file:
- ./.env.dev
depends_on:
- db
volumes:
postgres_data:
I start the service by running:
docker-compose up -d
I can load some data in there with a custom Django command that I wrote for my app. Everything is running fine (with data) on localhost:8000.
However, when I run
docker-compose down
(so without -v) and then again
docker-compose up -d
the database is empty again. The volume was not persisted. From what I read in the docker-compose docs and also in several posts here at SO, persisting the volume and reusing it when you start a new container should be the default behavior (which, if I understand it correctly, you can disable by using the --renew-anon-volumes flag).
However in my case, the volume is not persisted. Or maybe it is, but my data is gone.
By doing docker volume ls I can see that my volume (I'll use the name my_volume here) still exists after the docker-compose down command. However, the CreatedAt value has been changed. This makes me think it's a different volume with the same name, and my data is already gone, but I don't know how to confirm that.
This SO answer suggests mounting the volume on /var/lib/postgresql instead of /var/lib/postgresql/data. However, I've seen other resources (like this one) where the opposite is suggested. I've tried both, but neither option works.
Thanks for any advice.
It turns out that the Dockerfile of my app was using an entrypoint in which the following command was executed: python manage.py flush, which clears all data in the database. As this got executed every time the app container started, the data was wiped. It had nothing to do with docker-compose.
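For anyone hitting the same thing, a hedged sketch of what the entrypoint might look like once the destructive command is removed (the file name and the migrate step are assumptions, not the original project's code):

#!/bin/sh
# entrypoint.sh - apply migrations on start, but never flush existing data
set -e
python manage.py migrate --noinput
exec "$@"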

docker-compose on Windows volume not working

I've been playing with Docker for the past week and think the container idea is very useful, but despite reading everything I can for the past 3 days I can't get the volume mapping to work, i.e. get docker-compose to use my existing volume.
Docker Version: 18.03.1-ce
docker-compose version 1.21.1, build 7641a569
I created a volume via a Dockerfile, using the following:
# Reference SQL image
FROM microsoft/mssql-server-windows-developer
# Create directory within SQL container for database files mapped to the volume
VOLUME sqldata:c:/MSSQL
and here it shows:
C:\ProgramData\Docker\volumes>docker volume ls
local sqldata
Now I've tried probably 60+ different "solutions" based on Stack Overflow and the Docker forums, but none of them work. (Note: despite the Azure names below, I am simply trying to get this to run locally; Azure is the next hurdle.)
Docker-compose.yaml:
version: '3.4'
services:
  ws:
    image: wsManager
    container_name: azure-wcf
    ports:
      - "80"
    depends_on:
      - db
  db:
    image: dbimage:latest
    container_name: azure-db
    volumes:
      - \sqldata:/mssql
      # - type: volume
      #   source: sqldata
      #   target: /mssql
    ports:
      - "1433"
I've added a volumes section but it does not help,
volumes:
  sqldata:
    external:
      name: sqldata
I changed the - \sqldata:/mssql entry to every possible slash variation (.., ., ~, whatever), and moved the YAML file to C:\ProgramData\Docker\volumes - basically any suggestion that showed up in my search results. The dbimage is a SQL Server image that I need to persist the data from, but I am wondering what the magic is, as nothing I've tried works. Any help is GREATLY appreciated.
I'm running on Windows 10 Pro build 1803.
Why does this have to be so hard?
Thank you to whoever knows how to make this actually work.
The solution is to reference the true path on Windows using the volumes: option as below:
  sqldb:
    image: sqlimage
    container_name: azure-db
    volumes:
      - "C:\\ProgramData\\Docker\\volumes\\sqldata:c:\\mssql"
To persist the data I used the following:
environment:
  - "sa_password=ddsql2017##"
  - "ACCEPT_EULA=Y"
  - 'attach_dbs=[{"dbName":"MyDb","dbFiles":["C:\\MSSQL\\MyDb.mdf","C:\\MSSQL\\MyDb.ldf"]}]'
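To sanity-check that the Windows container can actually see the mapped database files, something like the following should list them (assuming the azure-db container name from above):

docker exec azure-db cmd /c dir C:\mssql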
Hope this helps someone else, as many of the examples I found searching both on SO and elsewhere did not work for me, and in the Docker forums there are a lot of posts saying mounting volumes does not work on Windows.
For those who are using Ubuntu on WSL:
sudo mkdir /c
sudo mount --bind /mnt/c /c
Navigate to your project directory using the new path (/c/your-project-path, not /mnt/c/your-project-path).
Edit your docker-compose.yml and use a relative path for the volume (like ./src instead of c/your-project-path/src) - see the sketch after these steps.
docker-compose up
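A minimal fragment of that compose edit might look like this (the service name and folders are placeholders):

services:
  app:
    build: .
    volumes:
      - ./src:/app/src   # relative path, resolved against /c/your-project-path rather than /mnt/c/...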
I was struggling with a similar problem when trying to mount a volume to a specific path on my Windows machine: basically it didn't work, so every time I restarted my Docker instance I lost all my DB data.
I finally found out that this is because Docker for Windows by default cannot interpret Windows paths, so the COMPOSE_CONVERT_WINDOWS_PATHS flag has to be activated (a .env alternative is sketched after these steps). To do so:
Run the command "set COMPOSE_CONVERT_WINDOWS_PATHS=1"
Restart Docker
Go to Settings > Shared Drives > Reset credentials, then select the drive and apply
From the command line, kill the containers (docker container rm -f )
Re-run the containers
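If you don't want to set the variable in every new shell, it should also be possible to put it in a .env file next to the compose file (hedged: this relies on docker-compose reading COMPOSE_* variables from its environment file):

# .env (same directory as docker-compose.yml)
COMPOSE_CONVERT_WINDOWS_PATHS=1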
Hope it helps
If your Windows account credentials have been changed, you also have to reset the credentials for shared drives (Settings > Shared Drives > Reset credentials).
In my case, the password was changed by my company's security policy.
Are you sure you really need to map to a certain host directory? If not, my solution is to create a volume beforehand and use it in docker-compose.yaml. I use the same scripts for both Windows and Linux. That is the beauty of Docker.
Here is what I did to start both Postgres and MySQL:
create_db.sh (you can run it in Git Bash or a similar environment on Windows):
docker volume create --name postgres-data -d local
docker volume create --name mysql-data -d local
docker-compose up -d
docker-compose.yaml:
version: '3'
services:
  postgres:
    image: postgres:latest
    environment:
      POSTGRES_DB: datasource
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    ports:
      - 5432:5432
    volumes:
      - postgres-data:/var/lib/postgresql/data
  mysql:
    image: mysql:latest
    environment:
      MYSQL_DATABASE: 'train'
      MYSQL_USER: 'mysql'
      MYSQL_PASSWORD: 'mysql'
      MYSQL_ROOT_PASSWORD: 'mysql'
    ports:
      - 3306:3306
    volumes:
      - mysql-data:/var/lib/mysql
volumes:
  postgres-data:
    external: true
  mysql-data:
    external: true
By default, it looks like sharing of drives is disabled after installing Docker on Windows, so you won't be able to use volumes (which are stored on disks).
Enabling such sharing, through Docker in the tray -> right click -> Settings, helped me; volumes started working fine.
Docker on Windows has strange behavior, as Windows has limitations with credentials and also with the virtual machine that Docker is using (Hyper-V, VirtualBox - depending on your Docker version and setup).
Basically, you are correct to map a folder in the volumes: section of your service. The path is:
version: '3.4'
services:
  db:
    image: dbimage:latest
    container_name: azure-db
    volumes:
      - c:/Temp/sqldata:/mssql
The important thing is that you do not need to explicitly create the volume in the volumes section; docker-compose up will create it (the same is true for docker run).
The strange thing is that it will never show up in docker volume ls, but it will be usable, with the same files inside the Windows directory and inside the container path /mssql.
You can test it with:
docker run --rm -v c:/Temp/sqldata:/data alpine ls /data
or
docker run --rm -v c:/Temp:/data alpine ls /data
If it disappears, it has probably lost the credentials; reset them via Docker -> Settings -> Shared Drives -> Reset credentials.
I hope that was clear and covered all the aspects for you.
Launch Docker from your Windows taskbar
Click on the Settings icon on top
Click Resources
Click File Sharing
Click on the (+) sign and add the path of the local folder in which you want to map the container volume.
It worked for me.

How to store MongoDB data with docker-compose

I have this docker-compose:
version: "2"
services:
api:
build: .
ports:
- "3007:3007"
links:
- mongo
mongo:
image: mongo
volumes:
- /data/mongodb/db:/data/db
ports:
- "27017:27017"
For the volume /data/mongodb/db:/data/db, is the first part (/data/mongodb/db) where the data is stored inside the image and the second part (/data/db) where it's stored locally?
It works in production (Ubuntu), but when I run it on my dev machine (Mac) I get:
ERROR: for mongo Cannot start service mongo: error while creating mount source path '/data/mongodb/db': mkdir /data/mongodb: permission denied
Even if I run it as sudo. I've added the /data directory in the "File Sharing" section of the Docker app on the Mac.
Is the idea to use the same docker-compose on both production and development? How do I solve this issue?
Actually it's the other way around (HOST:CONTAINER), /data/mongodb/db is on your host machine and /data/db is in the container.
You have added the /data in the shared folders of your dev machine but you haven't created /data/mongodb/db, that's why you get a permission denied error. Docker doesn't have the rights to create folders.
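A hedged fix on the Mac side is simply to create the host directory yourself before bringing the stack up (the paths come from the compose file above; the chown is an assumption about your local setup):

sudo mkdir -p /data/mongodb/db
sudo chown -R "$(whoami)" /data/mongodb
docker-compose up -d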
I get the impression you need to learn a little bit more about the fundamentals of Docker to fully understand what you are doing. There are a lot of potential pitfalls running Docker in production, and my recommendation is to learn the basics really well so you know how to handle them.
Here is what the documentation says about volumes:
[...] specify a path on the host machine (HOST:CONTAINER)
So you have it the wrong way around. The first part is the path on the host, e.g. your local machine, and the second is where the volume is mounted within the container.
Regarding your last question, have a look at this article: Using Compose in production.
Since Docker Compose file syntax version 3.2, you can use the long syntax of the volume property to specify the type of volume. This allows you to create a "bind" volume, which effectively links a folder on your host to a folder in your container.
Here is an example:
version : "3.2"
services:
mongo:
container_name: mongo
image: mongo
volumes:
- type: bind
source: /data
target: /data/db
ports:
- "42421:27017"
source is the folder on your host and target is the folder in your container.
More information available here: https://docs.docker.com/compose/compose-file/#long-syntax