I want to clarify the meaning of "device" and "mountpoint" when I run the command
docker volume inspect
for my Postgres container's volume. I manually created the folder /user/data/test_postgresdb_vol_2 to persist the data from the container, but now I'm confused because I have two different paths. Can you clarify what is happening and what the
"device" path and the "mountpoint" path mean?
Example of volume inspect:
[
    {
        "CreatedAt": "...",
        "Driver": "local",
        "Labels": {
            ....
        },
        "Mountpoint": "/var/lib/docker/volumes/test_pgdata/_data",
        "Name": "test_pgdata",
        "Options": {
            "device": "/user/data/test_postgresdb_vol_2",
            "o": "bind",
            "type": "none"
        },
        "Scope": "local"
    }
]
Example of docker-compose:
postgres:
  container_name: postgres
  image: postgres
  volumes:
    - pgdata:/var/lib/postgresql/data
  environment:
    ...
    PGDATA: /var/lib/postgresql/data/pgdata

volumes:
  pgdata:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /user/data/test_postgresdb_vol_2
Those details in the docker volume inspect output are implementation details that can be safely ignored.
Internally, the current standard implementation of Docker named volumes gives them a filesystem presence inside /var/lib/docker/volumes. In this case, you've told Docker that the volume should actually be created via the mount(2) system call, and more specifically as a bind-type mount. The options you see could be passed as parameters to mount(8):
/sbin/mount -o bind $DEVICE $MOUNT_POINT
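Substituting the device and Mountpoint values from the inspect output above, that works out to roughly:
/sbin/mount -o bind /user/data/test_postgresdb_vol_2 /var/lib/docker/volumes/test_pgdata/_data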
You might notice that the Driver and Options match things you've specified directly in the docker-compose.yml file: pgdata matches the name of the volume, test matches the name of the current directory (more specifically, the Compose project name, should you override that), and test_pgdata, where it appears, is a combination of the two.
None of this matters to standard application code. In the docker-compose file you've shown, you declare that the named volume is local and backed by a specific host directory, and that it is mounted into the postgres container at a specific path. The inspect-type commands produce low-level debugging data that you almost never need.
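For comparison, a plain bind mount would give you the same host-directory backing without the named-volume indirection. This is only a sketch, reusing the paths from your file:

postgres:
  container_name: postgres
  image: postgres
  volumes:
    # a host path left of the colon makes this a bind mount; no top-level volumes: block needed
    - /user/data/test_postgresdb_vol_2:/var/lib/postgresql/data
  environment:
    PGDATA: /var/lib/postgresql/data/pgdata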
Related
I am creating an Azure Function that must be connected to a local storage account. It's for study purposes. The problem does not exist if I run the function with the "default" options, the ones that are set when I create an Azure Function connected to a containerized local storage.
But now I want to customize my project using docker compose. Forget the function; it is not the problem at the moment and I don't care about it. Here is the compose file:
version: '3.4'

services:
  functionapp4:
    image: ${DOCKER_REGISTRY-}functionapp4
    container_name: MyFunction
    build:
      context: .
      dockerfile: FunctionApp4/Dockerfile

  storage:
    image: mcr.microsoft.com/azure-storage/azurite
    container_name: MyStorage
    restart: always
    ports:
      - 127.0.0.1:10000:10000
      - 127.0.0.1:10001:10001
      - 127.0.0.1:10002:10002
    environment:
      - AZURITE_ACCOUNTS="devst******:Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
    volumes:
      - azurite:/data

volumes:
  azurite:
When I run the project, both containers (function and storage) start. But here I can immediately see a problem:
the services have been started at http://0.0.0.0 even though I set 127.0.0.1 in the compose file. I also tried with "127.0.0.1:{portNumber}".
Now, I open Storage Explorer, where I created the storage with the same name and key I set in the compose file.
Now, when I click on the queues I get this error:
{
    "name": "RestError",
    "message": "Invalid storage account.\nRequestId:a20dea2a-2535-4098-950e-33a7f44ceca1\nTime:2023-02-08T07:36:52.554Z",
    "code": "InvalidOperation",
    "statusCode": 400,
    "request": {
        "streamResponseStatusCodes": {},
        "url": "http://127.0.0.1:10001/devst*****?timeout=30",
        ...
    }
}
I also tried to set the command in the docker compose file:
command: 'azurite'
In this case, the service starts listening at the correct host, but it is worse, because then I get an error and cannot connect to the storage account at all:
The problem seems to be in my environment variable:
environment:
  - AZURITE_ACCOUNTS="devst******:Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
But it is correctly set:
I tried both with quotation marks and without them. No change.
If I remove the env variable, I can connect to the default storage account correctly.
What's wrong in my configuration? Any suggestions, please?
Thank you
Just one small error in my configuration.
This line
- AZURITE_ACCOUNTS="devst******:Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
must be
- "AZURITE_ACCOUNTS=devst******:Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
Please note the position of the quotation marks: in the first form the quotes sit inside the YAML value, so they end up as part of the variable's contents; in the second form they wrap the whole list item, so they only delimit the YAML string.
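If in doubt, you can verify what the container actually received. A quick check, assuming the MyStorage container name from the compose file above and that the image ships the usual env and grep utilities:
docker exec MyStorage env | grep AZURITE_ACCOUNTS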
I'm trying to back up and restore a docker volume. This volume is a named volume attached to a postgres:10 db image. The volume is seeded via a pg_dump/restore. I'm using docker-compose, and here is what it looks like:
services:
  db:
    image: postgres:10
    ports:
      - '5432:5432'
    volumes:
      - postgres-data:/var/lib/postgresql/data

volumes:
  postgres-data:
    driver: local
  localstack-data:
    driver: local
Here is what the mounted volume looks like on the db service:
"Mounts": [
{
"Type": "volume",
"Name": "postgres-data",
"Source": "/var/lib/docker/volumes/postgres-data/_data",
"Destination": "/var/lib/postgresql/data",
"Driver": "local",
"Mode": "rw",
"RW": true,
"Propagation": ""
}
]
Now, when I say "backup/restore", I mean I want a backup of the data inside that volume so that, after I have messed all the data up, I can simply replace it with the backup and have fresh data: a simple "restore from snapshot" type of action. I'm following the documentation found on the Docker site.
To test it:
I add a user to my postgres-data
Stop my container: docker stop $(docker ps -q)
Perform the backup (db_1 is the container name): docker run --rm --volumes-from db_1 -v $(pwd):/backup busybox tar cvf /backup/backup.tar /var/lib/postgresql/data
Delete the user after the backup is complete
Here is an example of the output of the backup command, which completes with NO errors:
tar: removing leading '/' from member names
var/lib/postgresql/data/
var/lib/postgresql/data/PG_VERSION
var/lib/postgresql/data/pg_hba.conf
var/lib/postgresql/data/lib/
var/lib/postgresql/data/lib/postgresql/
var/lib/postgresql/data/lib/postgresql/data/
var/lib/postgresql/data/lib/postgresql/data/PG_VERSION
var/lib/postgresql/data/lib/postgresql/data/pg_hba.conf
var/lib/postgresql/data/lib/postgresql/data/lib/
var/lib/postgresql/data/lib/postgresql/data/lib/postgresql/
var/lib/postgresql/data/lib/postgresql/data/lib/postgresql/data/
var/lib/postgresql/data/lib/postgresql/data/lib/postgresql/data/PG_VERSION
var/lib/postgresql/data/lib/postgresql/data/lib/postgresql/data/pg_hba.conf
var/lib/postgresql/data/lib/postgresql/data/lib/postgresql/data/pg_stat/
Once done, the tar file gets created. Now, maybe the error is with the path? Looking at the output, I see lib/postgresql/data nested over and over. Not sure if that is an okay result or not.
Once done, I perform the "restore": docker run --rm --volumes-from db_1 -v $(pwd):/backup busybox sh -c "cd /var/lib/postgresql/data && tar xvf /backup/backup.tar --strip 1"
What I expect: to see the deleted user back in the db. Basically, to see the same data that was in the volume when I performed the backup.
What I'm seeing: the data is not getting restored. Still the same data: no user restored, and any data that has been changed stays changed.
Again, perhaps what I'm doing is the wrong thing, or "overkill", for the simple desire to hot-swap used volumes with a fresh, untouched one. Any ideas would be helpful, from debugging tips to a better approach.
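One pattern that avoids the nested lib/postgresql/data paths in the archive is to create the tar with relative paths from inside the data directory, so no --strip is needed on restore. A minimal sketch, reusing the db_1 container name from above:

# back up: archive the *contents* of the data directory with relative paths
docker run --rm --volumes-from db_1 -v $(pwd):/backup busybox sh -c "cd /var/lib/postgresql/data && tar cvf /backup/backup.tar ."
# restore: extract into the same directory, no --strip required
docker run --rm --volumes-from db_1 -v $(pwd):/backup busybox sh -c "cd /var/lib/postgresql/data && tar xvf /backup/backup.tar"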
I have the following docker-compose file:
version: '3.7'

volumes:
  postgres-data:

services:
  postgres:
    environment:
      - POSTGRES_PASSWORD=mypwd
      - POSTGRES_USER=randomuser
    image: 'postgres:14'
    restart: always
    volumes:
      - './postgres-data:/var/lib/postgresql/data'
I seem to have multiple issues regarding the volume:
A folder named postgres-data is created in the docker-compose file's location when I run up, though it seems that for other images, the data gets placed in the /var/lib/docker/volumes folder instead (without creating such a folder). Is this expected? Is it good practice to have the volume folder created in the same location as the docker-compose file, instead of the /var/lib/docker/volumes folder?
This folder has weird ownership; I can't get into it as my current user (even though I am in the docker group).
I tried reading the image documentation, especially the "Arbitrary --user Notes", but didn't understand what to do with it. I also tried not setting POSTGRES_USER (which then defaults to postgres), but the result is the same.
What's the correct way to create a volume using this image ?
Your volume mount is explicitly to a subdirectory of the current directory:
volumes:
  - './postgres-data:/var/lib/postgresql/data'
  #  ^^ (a slash before the colon always means a bind mount)
If you want to use a named volume, you need to declare that at the top level of the Compose file, and refer to the volume name (without a slash) when you use it:
volumes:
  postgres-data:
services:
  ...
    volumes:
      - 'postgres-data:/var/lib/postgresql/data'
      #  ^^ (no slash)
One isn't really "better" than the other for this case. A bind-mounted host directory is much easier to back up; a named volume will be noticeably faster on macOS or Windows; you can directly see and edit the files with a bind mount; you can use the Docker ecosystem to clean up named volumes. For a database in particular, seeing the data files isn't very useful, and I might prefer a named volume, but that's not at all a strong preference.
File ownership for bind mounts is a frequent question. On native Linux, the numeric user ID is the only thing that matters for permission checking. It is resolved into a username via the /etc/passwd file, but the host and container have different copies of this file (and that's okay). The unusual owner you're seeing with ls -l from the host matches the numeric uid of the default user in the postgres image.
That image is well-designed, though, and the upshot of that section in the Docker Hub documentation is that you can specify any user: you want in Compose, probably matching the host uid that owns the directory.
sudo rm -rf ./postgres-data   # remove the directory with the wrong owner
id -u                         # what's my current numeric uid?

version: '3.8'
services:
  postgres:
    volumes:                  # using a host directory
      - './postgres-data:/var/lib/postgresql/data'
    user: 1000                # matches the `id -u` output
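Alternatively, you could keep the image's default user and hand the bind-mounted directory to it instead. A sketch, assuming the official image's default postgres uid of 999 (the first command lets you confirm that):

docker run --rm postgres:14 id postgres   # typically prints uid=999(postgres) gid=999(postgres) ...
sudo chown -R 999:999 ./postgres-data     # give the data directory to that uid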
My docker compose file has three containers: web, nginx, and postgres. Postgres looks like this:
postgres:
  container_name: postgres
  restart: always
  image: postgres:latest
  volumes:
    - ./database:/var/lib/postgresql
  ports:
    - 5432:5432
My goal is to mount a volume corresponding to a local folder called ./database inside the postgres container at /var/lib/postgresql. When I start these containers and insert data into postgres, I verify that /var/lib/postgresql/data/base/ is full of the data I'm adding (in the postgres container), but on my local system, ./database only gets a data folder in it, i.e. ./database/data is created, but it's empty. Why?
Notes:
This suggests my above file should work.
This person is using docker services, which is interesting.
UPDATE 1
Per Nick's suggestion, I did a docker inspect and found:
"Mounts": [
{
"Source": "/Users/alex/Documents/MyApp/database",
"Destination": "/var/lib/postgresql",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
},
{
"Name": "e5bf22471215db058127109053e72e0a423d97b05a2afb4824b411322efd2c35",
"Source": "/var/lib/docker/volumes/e5bf22471215db058127109053e72e0a423d97b05a2afb4824b411322efd2c35/_data",
"Destination": "/var/lib/postgresql/data",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
This makes it seem like the data is being stolen by another volume I didn't define myself. Not sure why that is. Is the postgres image creating that volume for me? If so, is there some way to use that volume instead of the volume I'm mounting when I restart? Otherwise, is there a good way of disabling that other volume and using my own, ./database?
Strangely enough, the solution ended up being to change
volumes:
  - ./postgres-data:/var/lib/postgresql
to
volumes:
  - ./postgres-data:/var/lib/postgresql/data
The reason is that the postgres image declares VOLUME /var/lib/postgresql/data in its Dockerfile, so mounting at the parent /var/lib/postgresql leaves that subdirectory on a separate, anonymous volume; that anonymous volume is the second, unnamed mount in the inspect output above.
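You can check which paths an image declares as volumes with docker image inspect; the commented output below is what the official postgres image typically reports:

docker image inspect postgres --format '{{ .Config.Volumes }}'
# map[/var/lib/postgresql/data:{}]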
You can create a common volume for all Postgres data:
docker volume create pgdata
or you can set it up in the compose file:
version: "3"
services:
db:
image: postgres
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgress
- POSTGRES_DB=postgres
ports:
- "5433:5432"
volumes:
- pgdata:/var/lib/postgresql/data
networks:
- suruse
volumes:
pgdata:
This will create a volume named pgdata and mount it at the given path inside the container.
You can inspect this volume:
docker volume inspect pgdata
// output will be
[
    {
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/pgdata/_data",
        "Name": "pgdata",
        "Options": {},
        "Scope": "local"
    }
]
I would avoid using a relative path. Remember that docker is a daemon/client relationship.
When you execute the compose file, it is essentially just broken down into various docker client commands, which are then passed to the daemon. That ./database is then relative to the daemon, not the client.
Now, the docker dev team has had some back and forth on this issue, but the bottom line is that it can have some unexpected results.
In short, don't use a relative path, use an absolute path.
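For example, using the absolute host path from the inspect output above instead of the relative form:

volumes:
  - /Users/alex/Documents/MyApp/database:/var/lib/postgresql/data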
I think you just need to create your volume outside docker first, with docker create -v /location --name, and then reuse it.
Back when I used docker a lot, it wasn't possible to use a static docker volume in a dockerfile definition, so my suggestion is to try the command line (possibly with a script).
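A sketch of that data-volume-container pattern (the pgdata-store name is made up for illustration):

# create a stopped container that owns the volume
docker create -v /var/lib/postgresql/data --name pgdata-store postgres /bin/true
# run the real database reusing that container's volumes
docker run --volumes-from pgdata-store --name postgres -d postgres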
I want to dockerize MongoDB and store the data in a local volume.
But... it failed...
I have the mongo:latest image:
kerydeMacBook-Pro:~ hu$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED       VIRTUAL SIZE
mongo        latest   b11eedbc330f   2 weeks ago   317.4 MB
ubuntu       latest   6cc0fc2a5ee3   3 weeks ago   187.9 MB
I want to store the mongo data in ~/data, so:
kerydeMacBook-Pro:~ hu$ docker run -p 27017:27017 -v ~/data:/data/db --name mongo -d mongo
f570073fa3104a54a54f39dbbd900a7c9f74938e2e0f3f731ec8a3140a418c43
But... it does not work...
docker ps shows no running mongo daemon:
kerydeMacBook-Pro:~ hu$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
try to run "mongo" --failed
kerydeMacBook-Pro:~ hu$ docker exec -it f57 bash
Error response from daemon: Container f57 is not running
docker inspect mongo
kerydeMacBook-Pro:~ hu$ docker inspect mongo
[
    {
        "Id": "f570073fa3104a54a54f39dbbd900a7c9f74938e2e0f3f731ec8a3140a418c43",
        "Created": "2016-02-15T02:19:01.617824401Z",
        "Path": "/entrypoint.sh",
        "Args": [
            "mongod"
        ],
        "State": {
            "Status": "exited",
            "Running": false,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 0,
            "ExitCode": 100,
            "Error": "",
            "StartedAt": "2016-02-15T02:19:01.74102535Z",
            "FinishedAt": "2016-02-15T02:19:01.806376434Z"
        },
        "Mounts": [
            {
                "Source": "/Users/hushuming/data",
                "Destination": "/data/db",
                "Mode": "",
                "RW": true
            },
            {
                "Name": "365e687c4e42a510878179962bea3c7699b020c575812c6af5a1718eeaf7b57a",
                "Source": "/mnt/sda1/var/lib/docker/volumes/365e687c4e42a510878179962bea3c7699b020c575812c6af5a1718eeaf7b57a/_data",
                "Destination": "/data/configdb",
                "Driver": "local",
                "Mode": "",
                "RW": true
            }
        ],
If I do not set a data volume, the mongo image works!
But when setting a data volume, it doesn't. Who can help me?
Try checking the docker logs to see what was going on when the container stopped and went into "Exited" mode.
See also if specifying the full path for the volume helps:
docker run -p 27017:27017 -v /home/<user>/data:/data/db ...
The OP adds:
docker logs mongo
exception in initAndListen: 98
Unable to create/open lock file: /data/db/mongod.lock
errno:13 Permission denied
Is a mongod instance already running?
terminating 2016-02-15T06:19:17.638+0000
I CONTROL [initandlisten] dbexit: rc: 100
An errno:13 is what issue 30 is about.
This comment adds:
It's a file ownership/permission issue (not related to this docker image), either using boot2docker with VB or a vagrant box with VB.
Nevertheless, I managed to hack the ownership, remounting the /Users shared volume inside boot2docker to uid 999 and gid 999 (which is what the mongo docker image uses), and got it to start:
$ boot2docker ssh
$ sudo umount /Users
$ sudo mount -t vboxsf -o uid=999,gid=999 Users /Users
But... mongod crashes due to the filesystem type not being supported (mmap not working on vboxsf).
So the actual solution would be to try a DVC: Data Volume Container, because right now the mongodb doc mentions:
MongoDB requires a filesystem that supports fsync() on directories.
For example, HGFS and VirtualBox's shared folders do not support this
operation.
So:
mounting to OS X will not work for MongoDB because of the way that VirtualBox shared folders work.
For a DVC (Data Volume Container), try docker volume create:
docker volume create mongodbdata
Then use it as:
docker run -p 27017:27017 -v mongodbdata:/data/db ...
And see if that works better.
As I mention in the comments:
A docker volume inspect mongodbdata (see docker volume inspect) will give you its path (which you can then back up if you need).
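Along those lines, a minimal backup sketch that goes through a throwaway container instead of touching the host path directly:

docker run --rm -v mongodbdata:/data -v $(pwd):/backup busybox tar cvf /backup/mongodbdata.tar -C /data .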
Per Docker Docs:
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers.
docker volume create mongodbdata
docker run -p 27017:27017 -v mongodbdata:/data/db mongo
Via Docker Compose:
version: '2'

services:
  mongodb:
    image: mongo:latest
    volumes:
      - ./<your-local-path>:/data/db
/data/db is the location of the data saved in the container.
<your-local-path> is the location on your machine, aka the host machine, where the actual database journal files will be saved.
For anyone running a MongoDB container on Windows: as described here, there's an issue when you mount a volume from the Windows host into a MongoDB container using a path (we call this a local volume).
You can overcome the issue using a Docker volume (a volume managed by Docker):
docker volume create mongodata
Or using docker-compose as my preference:
version: "3.4"
services:
....
db:
image: mongo
volumes:
- mongodata:/data/db
restart: unless-stopped
volumes:
mongodata:
Tested on Windows 10, and it works.
Found a link: VirtualBox Shared Folders are not supported by mongodb.