My docker-compose file has three containers: web, nginx, and postgres. The postgres service looks like this:
postgres:
  container_name: postgres
  restart: always
  image: postgres:latest
  volumes:
    - ./database:/var/lib/postgresql
  ports:
    - 5432:5432
My goal is to mount a volume corresponding to a local folder called ./database inside the postgres container at /var/lib/postgresql. When I start these containers and insert data into postgres, I can verify that /var/lib/postgresql/data/base/ is full of the data I'm adding (inside the postgres container), but on my local system ./database only gets a data folder in it, i.e. ./database/data is created, but it's empty. Why?
UPDATE 1
Per Nick's suggestion, I did a docker inspect and found:
"Mounts": [
    {
        "Source": "/Users/alex/Documents/MyApp/database",
        "Destination": "/var/lib/postgresql",
        "Mode": "rw",
        "RW": true,
        "Propagation": "rprivate"
    },
    {
        "Name": "e5bf22471215db058127109053e72e0a423d97b05a2afb4824b411322efd2c35",
        "Source": "/var/lib/docker/volumes/e5bf22471215db058127109053e72e0a423d97b05a2afb4824b411322efd2c35/_data",
        "Destination": "/var/lib/postgresql/data",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
],
Which makes it seem like the data is being captured by another volume I didn't define myself. I'm not sure why that is. Is the postgres image creating that volume for me? If so, is there some way to use that volume instead of the one I'm mounting when I restart? Otherwise, is there a good way of disabling that other volume and using my own, ./database?
Strangely enough, the solution ended up being to change
volumes:
  - ./postgres-data:/var/lib/postgresql
to
volumes:
  - ./postgres-data:/var/lib/postgresql/data
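For context, the postgres image (at least at the time of this question) declares a VOLUME for /var/lib/postgresql/data in its Dockerfile, so mounting only the parent /var/lib/postgresql leaves that data directory on an anonymous volume (the second, auto-named entry in the docker inspect output above), and the bind mount never receives the actual data. A minimal sketch of the corrected service, reusing the names from the original question:

postgres:
  container_name: postgres
  restart: always
  image: postgres:latest
  volumes:
    - ./database:/var/lib/postgresql/data  # bind the host folder directly onto the declared data directory
  ports:
    - 5432:5432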
You can create a common volume for all Postgres data:
docker volume create pgdata
or you can define it in the compose file:
version: "3"
services:
  db:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgress
      - POSTGRES_DB=postgres
    ports:
      - "5433:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - suruse
volumes:
  pgdata:
This will create a volume named pgdata and mount it at the container path /var/lib/postgresql/data.
You can inspect this volume
docker volume inspect pgdata
// output will be
[
    {
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/pgdata/_data",
        "Name": "pgdata",
        "Options": {},
        "Scope": "local"
    }
]
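If you instead pre-create the volume with docker volume create pgdata (the first option above), a sketch of how to reuse it from the compose file is to declare it as external, so Compose mounts the existing volume rather than creating a project-prefixed one:

volumes:
  pgdata:
    external: true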
I would avoid using a relative path. Remember that Docker has a client/daemon architecture.
When you run the compose file, it is essentially broken down into various Docker client commands, which are then passed to the daemon. That ./database is then resolved relative to the daemon, not the client.
Now, the Docker dev team has had some back and forth on this issue, but the bottom line is that it can have some unexpected results.
In short, don't use a relative path; use an absolute path.
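As a sketch of that advice, using the absolute host path that appeared in the docker inspect output above, the mount would be written as:

volumes:
  - /Users/alex/Documents/MyApp/database:/var/lib/postgresql/data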
I think you just need to create your volume outside Docker Compose first, with docker create -v /location --name <name>, and then reuse it.
Back when I was using Docker a lot, it wasn't possible to reference a pre-existing (static) volume from a Dockerfile definition, so my suggestion is to do it from the command line (possibly with a script).
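A sketch of that older data-volume-container approach (the container name here is illustrative); on newer Docker versions, docker volume create as shown earlier in the thread is the simpler route:

# create a container whose only job is to own the volume
docker create -v /var/lib/postgresql/data --name pgdata-holder postgres:latest
# run the real database reusing that container's volumes
docker run -d --volumes-from pgdata-holder postgres:latest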
Related
I am creating an Azure Function that must be connected to a local storage account. It's for study purposes. The problem does not exist if I run the function with the "default" options, i.e. the ones that are set when I create an Azure Function connected to a containerized local storage.
But now I want to customize my project using Docker Compose. Forget about the function; it is not the problem at the moment and I don't care about it. Here is the compose file:
version: '3.4'
services:
  functionapp4:
    image: ${DOCKER_REGISTRY-}functionapp4
    container_name: MyFunction
    build:
      context: .
      dockerfile: FunctionApp4/Dockerfile
  storage:
    image: mcr.microsoft.com/azure-storage/azurite
    container_name: MyStorage
    restart: always
    ports:
      - 127.0.0.1:10000:10000
      - 127.0.0.1:10001:10001
      - 127.0.0.1:10002:10002
    environment:
      - AZURITE_ACCOUNTS="devst******:Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
    volumes:
      - azurite:/data
volumes:
  azurite:
When I run the project, both containers (function and storage) start. But I can immediately see a problem:
the services are listening on http://0.0.0.0 even though I set 127.0.0.1 in the compose file. I also tried with "127.0.0.1:{portNumber}".
Now I open Storage Explorer, where I created a storage account with the same name and key I set in the compose file:
Now, when I click on queue I get this error:
{
    "name": "RestError",
    "message": "Invalid storage account.\nRequestId:a20dea2a-2535-4098-950e-33a7f44ceca1\nTime:2023-02-08T07:36:52.554Z",
    "code": "InvalidOperation",
    "statusCode": 400,
    "request": {
        "streamResponseStatusCodes": {},
        "url": "http://127.0.0.1:10001/devst*****?timeout=30",
        ...
    }
}
I also tried setting the command in the docker compose file:
command: 'azurite'
In this case, the service starts listening on the correct host, but it is worse, because I get an error that I cannot connect to the storage account at all:
The problem seems to be in my environment variable:
environment:
  - AZURITE_ACCOUNTS="devst******:Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
But it is correctly set:
I tried both with quotation marks and without them. No change.
If I remove the env variable, I can connect to the default storage account correctly.
What's wrong in my configuration? Any suggestions, please?
Thank you
Just one small error in my configuration.
This line
- AZURITE_ACCOUNTS="devst******:Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
must be
- "AZURITE_ACCOUNTS=devst******:Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
Please note where the quotation marks go: in the YAML list form, the quotes must wrap the whole KEY=VALUE entry; otherwise they end up as part of the variable's value and Azurite rejects the account.
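An alternative sketch that sidesteps the quoting question entirely is the map form of environment, where only the value is quoted (same placeholder account as above):

environment:
  AZURITE_ACCOUNTS: "devst******:Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="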
I'm trying to back up/restore a Docker volume. This volume is a named volume attached to a postgres:10 db image. The volume is seeded via a pg_dump/restore. I'm using docker-compose and here is what it looks like:
services:
  db:
    image: postgres:10
    ports:
      - '5432:5432'
    volumes:
      - postgres-data:/var/lib/postgresql/data
volumes:
  postgres-data:
    driver: local
  localstack-data:
    driver: local
Here is what the mounted volume looks like on the db service:
"Mounts": [
    {
        "Type": "volume",
        "Name": "postgres-data",
        "Source": "/var/lib/docker/volumes/postgres-data/_data",
        "Destination": "/var/lib/postgresql/data",
        "Driver": "local",
        "Mode": "rw",
        "RW": true,
        "Propagation": ""
    }
]
Now, when I say "backup/restore", I mean I want to have a backup of the data inside that volume, so that after I have messed all the data up I can simply replace it with the backup and have fresh data; a simple "restore from snapshot" type of action. I'm following the documentation found on the Docker site.
To test it:
I add a user to my postgres database
Stop my containers: docker stop $(docker ps -q)
Perform the backup (db_1 is the container name): docker run --rm --volumes-from db_1 -v $(pwd):/backup busybox tar cvf /backup/backup.tar /var/lib/postgresql/data
Delete the user after the backup is complete
Here is an example of the output of the backup command, which completes with NO errors:
tar: removing leading '/' from member names
var/lib/postgresql/data/
var/lib/postgresql/data/PG_VERSION
var/lib/postgresql/data/pg_hba.conf
var/lib/postgresql/data/lib/
var/lib/postgresql/data/lib/postgresql/
var/lib/postgresql/data/lib/postgresql/data/
var/lib/postgresql/data/lib/postgresql/data/PG_VERSION
var/lib/postgresql/data/lib/postgresql/data/pg_hba.conf
var/lib/postgresql/data/lib/postgresql/data/lib/
var/lib/postgresql/data/lib/postgresql/data/lib/postgresql/
var/lib/postgresql/data/lib/postgresql/data/lib/postgresql/data/
var/lib/postgresql/data/lib/postgresql/data/lib/postgresql/data/PG_VERSION
var/lib/postgresql/data/lib/postgresql/data/lib/postgresql/data/pg_hba.conf
var/lib/postgresql/data/lib/postgresql/data/lib/postgresql/data/pg_stat/
Once done, the tar file gets created. Now, maybe the error is with the path? If you look at the output, lib/postgresql/data is nested over and over. I'm not sure whether that is an okay result or not.
Once done, I perform the "restore": docker run --rm --volumes-from db_1 -v $(pwd):/backup busybox sh -c "cd /var/lib/postgresql/data && tar xvf /backup/backup.tar --strip 1"
What I expect: To see the deleted user back in the db. Basically, to see the same data that was in the volume when I performed the backup.
What I'm seeing: the data is not getting restored. Still same data. No user restored and any data that has been changed stays changed.
Again, perhaps what I'm doing is the wrong approach or "overkill" for the simple goal of hot-swapping a used volume for a fresh, untouched one. Any ideas would be helpful, whether for debugging this or for a better approach.
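For comparison, a minimal sketch of a backup/restore pair that archives relative to the data directory, so the tar contains no var/lib/postgresql/data prefix and no --strip is needed (it assumes the same db_1 container name and that the containers are stopped, as above):

# backup: archive the contents of the data directory itself
docker run --rm --volumes-from db_1 -v $(pwd):/backup busybox \
  tar cvf /backup/backup.tar -C /var/lib/postgresql/data .

# restore: clear the directory and extract the archive back into it
docker run --rm --volumes-from db_1 -v $(pwd):/backup busybox \
  sh -c "rm -rf /var/lib/postgresql/data/* && tar xvf /backup/backup.tar -C /var/lib/postgresql/data"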
I set up a GitHub Codespaces environment using devcontainer.json and docker-compose.yaml. Everything works fine, but the postgres database defined in docker-compose.yml loses its data every time the container needs to be rebuilt.
Here's the bottom part of the docker-compose.yml
db:
  image: postgres:latest
  restart: unless-stopped
  volumes:
    - postgres-data:/var/lib/postgresql/data
  environment:
    POSTGRES_USER: test_user
    POSTGRES_DB: test_db
    POSTGRES_PASSWORD: test_pass
volumes:
  postgres-data:
As you can see, I am trying to map the postgres data directory onto a postgres-data volume, but this doesn't work for some reason.
What am I doing wrong that is preventing the postgres data from persisting between container builds?
Another option would be to look into using Spawn. (Disclaimer: I'm one of the devs working on it.)
We've written some documentation about exactly how to use Spawn-hosted databases with GitHub Codespaces here: https://docs.spawn.cc/blog/2021/08/01/spawn-and-codespaces
This will allow you to provision a database that's independent of the GitHub codespace and preserve data between restarts.
You get some extra features with Spawn, like arbitrary save points, resets, and loading back to saved revisions, but the key functionality of spinning up a database for a GitHub codespace and preserving its data is one of the things it works extremely well for.
According to https://docs.github.com/en/codespaces/customizing-your-codespace/configuring-codespaces-for-your-project#dockerfile,
only Docker images can be pulled and set up; nowhere do they mention that volume persistence is guaranteed.
And after going through https://code.visualstudio.com/docs/remote/devcontainerjson-reference, it looks like mounts and a few other volume-related features are not supported in Codespaces:
workspaceMount: Not yet supported in Codespaces or when using Clone Repository in Container Volume.
Workaround:
In the .devcontainer folder where your Dockerfile is present, add a line like this:
RUN curl https://<your_public_cloud>/your_volume.vol -O
Here <your_public_cloud> can be Google Drive, AWS, or any endpoint from which you can download the volume; this is also the volume you need to persist.
Once it's downloaded, you can mount the volume to the postgres service or do a hot swap.
And when you want to save, just upload the volume to your cloud storage provider.
Repeat the process every time you build, and save and upload before "unbuilding" or dismissing your codespace, whatever you like to call it.
Hope that eases your issue, happy coding!
As long as you don't remove the volume, for example with docker-compose down --volumes, the data should persist.
I had the same issue, and it turned out that I had set up a crontab running docker system prune -af every 15 minutes!
You could just mount a host directory, instead of using a docker volume:
volumes:
  - /home/me/postgres_data:/var/lib/postgresql/data
This guarantees that no volume cleanup (accidental or deliberate) nukes your database.
Indeed, the postgres image docs do this in their examples; see the PGDATA environment variable.
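A sketch of that pattern (host path as in the snippet above; the pgdata subdirectory name follows the image docs but is otherwise illustrative):

db:
  image: postgres:latest
  environment:
    PGDATA: /var/lib/postgresql/data/pgdata  # keep the cluster in a subdirectory of the bind mount
  volumes:
    - /home/me/postgres_data:/var/lib/postgresql/data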
As you don't have access to the VM, maybe the directory containing your docker-compose.yml changes.
In that case, the volume name may change too.
Indeed, by default, your volume name would be the following:
<directory_name>_postgres-data
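To check which name Compose actually used (the directory name below is hypothetical), you can list the volumes:

# if the compose file lives in a directory called workspace,
# the default volume name gets that prefix:
docker volume ls --filter name=postgres-data
# DRIVER    VOLUME NAME
# local     workspace_postgres-data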
Could you try a named volume (supported starting with compose file format 3.4):
db:
  image: postgres:latest
  restart: unless-stopped
  volumes:
    - postgres-data:/var/lib/postgresql/data
  environment:
    POSTGRES_USER: test_user
    POSTGRES_DB: test_db
    POSTGRES_PASSWORD: test_pass
volumes:
  postgres-data:
    external: false
    name: postgres-data
The docker-compose documentation can be found here:
https://docs.docker.com/compose/compose-file/compose-file-v3/#name
EDIT 1
If your VM is created at each build, so are its Docker dependencies: volumes, networks, etc.
A persistent volume is needed somewhere (one that survives VM builds).
You may have to create a directory in your local workspace, like:
/local/workspace/postgres-data/
which becomes, in Codespaces, according to my understanding:
./postgres-data
Check permissions; your user may not exist in the container.
As a result, your compose file becomes:
db:
  image: postgres:latest
  restart: unless-stopped
  volumes:
    - ./postgres-data:/var/lib/postgresql/data
  environment:
    POSTGRES_USER: test_user
    POSTGRES_DB: test_db
    POSTGRES_PASSWORD: test_pass
I want to clarify the meaning of "device" and "mountpoint" when I run the command
docker volume inspect
for a Postgres container. I manually created the test_postgresdb_vol_2 folder at /user/data/test_postgresdb_vol_2 to persist the data from the container, but now I'm confused since I have two different paths. Can you clarify what is happening and what the
"device" path and "mountpoint" path mean?
Example of volume inspect:
[
    {
        "CreatedAt": "...",
        "Driver": "local",
        "Labels": {
            ....
        },
        "Mountpoint": "/var/lib/docker/volumes/test_pgdata/_data",
        "Name": "test_pgdata",
        "Options": {
            "device": "/user/data/test_postgresdb_vol_2",
            "o": "bind",
            "type": "none"
        },
        "Scope": "local"
    }
]
Example of docker-compose:
postgres:
  container_name: postgres
  image: postgres
  volumes:
    - pgdata:/var/lib/postgresql/data
  environment:
    ...
    PGDATA: /var/lib/postgresql/data/pgdata
volumes:
  pgdata:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /user/data/test_postgresdb_vol_2
Those details in the docker volume inspect output are implementation details that can be safely ignored.
Internally, the current standard implementation of Docker named volumes gives them a filesystem presence inside /var/lib/docker/volumes. In this case, you've told Docker that the volume should actually be created via the mount(2) system call, and more specifically as a bind-type mount. The options you see could be passed as parameters to mount(8):
/sbin/mount -o bind $DEVICE $MOUNT_POINT
You might notice that the Driver and Options match things you've specified directly in the docker-compose.yml file, pgdata matches the name of the volume, test matches the name of the current directory (and more specifically the Compose project name, should you override that), and test_pgdata where it appears is a combination of the two.
None of this matters to standard application code. From the docker-compose file you've shown, you declare that the named volume is local and backed by a specific host directory, and it mounts into the postgres container on a specific path. The inspect-type commands produce low-level debugging data that you almost never need.
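For reference, a CLI sketch that creates an equivalent bind-backed named volume by hand, with the same driver options and device path as the compose file above (test_pgdata matches the name seen in the inspect output):

docker volume create \
  --driver local \
  --opt type=none \
  --opt o=bind \
  --opt device=/user/data/test_postgresdb_vol_2 \
  test_pgdata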
I want to use the Dockerized MongoDB and store its data in a local volume.
But... it failed...
I have the mongo:latest image:
kerydeMacBook-Pro:~ hu$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
mongo latest b11eedbc330f 2 weeks ago 317.4 MB
ubuntu latest 6cc0fc2a5ee3 3 weeks ago 187.9 MB
I want to store the mongo data in ~/data, so:
kerydeMacBook-Pro:~ hu$ docker run -p 27017:27017 -v ~/data:/data/db --name mongo -d mongo
f570073fa3104a54a54f39dbbd900a7c9f74938e2e0f3f731ec8a3140a418c43
But... it does not work.
docker ps shows no mongo daemon:
kerydeMacBook-Pro:~ hu$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Trying to exec into the "mongo" container fails:
kerydeMacBook-Pro:~ hu$ docker exec -it f57 bash
Error response from daemon: Container f57 is not running
docker inspect mongo
kerydeMacBook-Pro:~ hu$ docker inspect mongo
[
    {
        "Id": "f570073fa3104a54a54f39dbbd900a7c9f74938e2e0f3f731ec8a3140a418c43",
        "Created": "2016-02-15T02:19:01.617824401Z",
        "Path": "/entrypoint.sh",
        "Args": [
            "mongod"
        ],
        "State": {
            "Status": "exited",
            "Running": false,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 0,
            "ExitCode": 100,
            "Error": "",
            "StartedAt": "2016-02-15T02:19:01.74102535Z",
            "FinishedAt": "2016-02-15T02:19:01.806376434Z"
        },
        "Mounts": [
            {
                "Source": "/Users/hushuming/data",
                "Destination": "/data/db",
                "Mode": "",
                "RW": true
            },
            {
                "Name": "365e687c4e42a510878179962bea3c7699b020c575812c6af5a1718eeaf7b57a",
                "Source": "/mnt/sda1/var/lib/docker/volumes/365e687c4e42a510878179962bea3c7699b020c575812c6af5a1718eeaf7b57a/_data",
                "Destination": "/data/configdb",
                "Driver": "local",
                "Mode": "",
                "RW": true
            }
        ],
If I do not set the data volume, the mongo image works!
But when I set the data volume, it doesn't. Who can help me?
Try checking docker logs to see what was going on when the container stopped and went into "Exited" mode.
See also whether specifying the full path for the volume helps:
docker run -p 27017:27017 -v /home/<user>/data:/data/db ...
The OP adds:
docker logs mongo
exception in initAndListen: 98
Unable to create/open lock file: /data/db/mongod.lock
errno:13 Permission denied
Is a mongod instance already running?
terminating 2016-02-15T06:19:17.638+0000
I CONTROL [initandlisten] dbexit: rc: 100
An errno:13 is what issue 30 is about.
This comment adds:
It's a file ownership/permission issue (not related to this docker image), either using boot2docker with VB or a vagrant box with VB.
Nevertheless, I managed to hack the ownership, remounting the /Users shared volume inside boot2docker to uid 999 and gid 999 (which are what mongo docker image uses) and got it to start:
$ boot2docker ssh
$ sudo umount /Users
$ sudo mount -t vboxsf -o uid=999,gid=999 Users /Users
But... mongod crashes due to filesystem type not being supported (mmap not working on vboxsf)
So the actual solution would be to try a DVC: Data Volume Container, because right now the mongodb doc mentions:
MongoDB requires a filesystem that supports fsync() on directories.
For example, HGFS and Virtual Box’s shared folders do not support this
operation.
So:
the mounting to OSX will not work for MongoDB because of the way that virtualbox shared folders work.
For a DVC (Data Volume Container), try docker volume create:
docker volume create mongodbdata
Then use it as:
docker run -p 27017:27017 -v mongodbdata:/data/db ...
And see if that works better.
As I mention in the comments:
A docker volume inspect mongodbdata (see docker volume inspect) will give you its path (which you can then back up if you need to).
Per Docker Docs:
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers.
docker volume create mongodbdata
docker run -p 27017:27017 -v mongodbdata:/data/db mongo
Via Docker Compose:
version: '2'
services:
  mongodb:
    image: mongo:latest
    volumes:
      - ./<your-local-path>:/data/db
/data/db is the location of the data saved inside the container.
<your-local-path> is the location on your machine (the host machine) where the actual database journal files will be saved.
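If you prefer the named volume created with docker volume create above over a host path, a sketch of the equivalent compose file (same mongodbdata name, declared as external so Compose reuses the existing volume) would be:

version: '2'
services:
  mongodb:
    image: mongo:latest
    volumes:
      - mongodbdata:/data/db
volumes:
  mongodbdata:
    external: true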
For anyone running a MongoDB container on Windows: as described here, there's an issue when you mount a volume from the Windows host into a MongoDB container using a host path (we call this a local volume).
You can overcome the issue by using a Docker volume (a volume managed by Docker):
docker volume create mongodata
Or use docker-compose, which is my preference:
version: "3.4"
services:
  ....
  db:
    image: mongo
    volumes:
      - mongodata:/data/db
    restart: unless-stopped
volumes:
  mongodata:
Tested on Windows 10 and it works.
Found a link: VirtualBox shared folders are not supported by MongoDB.