I am deploying Postgres with Docker, Ansible, and Terraform on AWS.
Things are going relatively well: I start the instance with Terraform, provision it with Docker using Ansible, start my Postgres container with Ansible as well, and attach an EBS volume to the instance, which I intend to use as the main data storage.
But I am confused as to how to attach the volume to the Docker container (not to the instance, as I am able to do that using Terraform).
I imagine it is possible using Ansible or by modifying the Dockerfile, but the documentation on "volumes", which seems to be the answer, is not that clear to me.
So if I had an Ansible playbook like this:
- name: Start postgis
  docker_container:
    name: postgis
    image: "{{ ecr_url }}"
    network_mode: bridge
    exposed_ports:
      - 5432
    published_ports:
      - 5432:5432
    state: started
How would I specify the EBS volume to be used for the data storage of Postgres?
resource "aws_volume_attachment" "ebs-volume-postgis-attach" {
device_name = "/dev/xvdh"
volume_id = "${aws_ebs_volume.ebs-volume-postgis.id}"
instance_id = "${aws_instance.postgis.id}"
}
That was the code used to attach the EBS volume, in case someone is interested.
Please ask for any kind of info that you need; all help is deeply appreciated.
Here is a checklist (steps 3 and 4 are sketched below):
1. Attach the EBS volume (disk) to the EC2 instance (e.g. /dev/xvdh)
2. Make a partition (optional) (e.g. /dev/xvdh1)
3. Make a filesystem on the partition/disk
4. Mount the filesystem inside your EC2 instance (e.g. /opt/ebs_data)
5. Start the Docker container with a volume (e.g. /opt/ebs_data:/var/lib/postgresql/data)
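Since you are already provisioning with Ansible, steps 3 and 4 can be done with the filesystem and mount modules. A minimal sketch, assuming the device really shows up as /dev/xvdh and ext4 is acceptable (adjust both to your setup):
- name: Create a filesystem on the EBS device (skipped if one already exists)
  filesystem:
    fstype: ext4
    dev: /dev/xvdh

- name: Mount the EBS volume at /opt/ebs_data and persist it in /etc/fstab
  mount:
    path: /opt/ebs_data
    src: /dev/xvdh
    fstype: ext4
    state: mounted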
In Ansible's docker_container module, volumes is a list, so:
- docker_container:
    name: postgis
    image: "{{ ecr_url }}"
    network_mode: bridge
    exposed_ports:
      - 5432
    published_ports:
      - 5432:5432
    state: started
    volumes:
      - /opt/ebs_data:/var/lib/postgresql/data
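Once the task has run, you can sanity-check that the bind mount took effect (postgis is the container name from the task above):
docker inspect -f '{{ .Mounts }}' postgis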
Related
I am using docker-compose to spin up a Spring API with a Postgres database. I am new to Docker and am trying to persist my database using a named volume I created with
docker volume create employeedata
I add this volume inside my docker-compose.yml, but the database does not persist if I stop or remove my containers.
version: '3.8'
services:
  app:
    container_name: springboot-postgresql
    image: springboot-postgresql
    build: ./
    ports:
      - "8080:8080"
    depends_on:
      - postgresqldb
  postgresqldb:
    image: postgres
    ports:
      - "5432:5432"
    volumes:
      - employeedata:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_DB=employeedb
volumes:
  employeedata:
I tried doing docker inspect employeedata and got the result below.
It seems fishy to me that the docker-compose version is 2 and not 3, plus I don't understand how the mountpoint is related to the volume path I specify in my docker-compose.yml above.
I would appreciate your help.
Docker Compose creates all objects in a project namespace. This namespace is usually the folder name of the docker-compose.yml file, but you can set it by passing --project-name to (all) your calls to docker compose.
As Docker does not have first-class namespaces, the project name is simply used as a prefix for all objects defined in the compose file. So in this case, assuming your project was in a folder called "project", Compose would have created project_app as the container and project_employeedata as the volume.
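For example, a docker volume ls in your situation would show the manually created volume and the Compose-created one side by side (hypothetical listing, assuming the project folder is called "project"):
docker volume ls
DRIVER    VOLUME NAME
local     employeedata
local     project_employeedata
Your compose file was writing to project_employeedata, not to the employeedata volume you created by hand.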
To override this for containers, you specify an explicit container_name, as you have done. But you really shouldn't, as it means that any two deployments of compose files with this name will conflict.
To override it for volumes, tell Docker Compose that the volume is externally created and provide the external name. Otherwise Compose will try to use the namespaced name:
volumes:
  employeedata:
    external: true
    name: employeedata
Again, letting Compose manage the volume name is probably the better option. Simply ensure the directory hosting the compose file has a suitable unique name, or pass one via --project-name, and then manage the volume as whatever_employeedata.
NB: Docker Compose does not remove Compose-managed volumes unless -v / --volumes is passed to docker compose down, so your data will persist here.
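In other words:
docker compose down       # removes containers and networks, keeps volumes
docker compose down -v    # also removes compose-managed volumes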
/var/lib/docker/volumes is simply the (default) location where Docker manages volumes.
The 2.0 refers to the version of Compose, not the version in your compose.yml file.
I created a volume in Docker using the command:
docker volume create pg-data
Then I set up a basic PostgreSQL database from the postgres image:
docker run --rm -v pg-data:/var/lib/postgresql/data --name pg-docker -e POSTGRES_PASSWORD=docker -p 5433:5432 postgres
Everything worked fine. The database persisted and I could even access it directly from the host. I created several roles here, like app_user_1.
Then I wanted to spin up PostgreSQL in a container using docker-compose; I shut down the above PostgreSQL container beforehand.
There I have this setting:
version: '3.7'
services:
  db:
    image: postgres
    volumes:
      - pg-data:/var/lib/postgresql/data/
    expose:
      - 5432
    restart: always
    environment:
      - POSTGRES_PASSWORD=docker
      - POSTGRES_USER=postgres
  web:
    build: .
    volumes:
      - ./app:/app
    ports:
      - 8001:8000
    environment:
      - ENVIRONMENT=dev
      - TESTING=0
    depends_on:
      - db
volumes:
  pg-data:
However, it seems that even though I mapped the same volume and used the same env settings as in the docker run command, the PostgreSQL instance in the container created with docker-compose has no databases and no roles at all.
I get the following error:
psql: error: FATAL: role "postgres" does not exist
or
psql: error: FATAL: role "app_user_1" does not exist
So it seems it behaves as though it is a different instance of PostgreSQL.
When I restarted the first container with docker run, everything was there (all the databases and roles).
Any idea why this is happening? How can I reuse the databases from the first container in docker-compose?
You need to define the volume you wish to use (the one you created manually with docker volume create) as external to docker-compose, as it was created externally.
This is because the volumes created by docker-compose are 'internal' to it, so ones created by plain docker are 'external'. =)
See the official docs at https://docs.docker.com/storage/volumes/#use-a-volume-with-docker-compose
The change to your compose file would be as follows:
...
volumes:
  pg-data:
    external: true
(Just that last line)
Hope that helps! =)
Additional Note
You can confirm this by running docker volume ls | grep pg-data, which lists all volumes and then shows only the ones referencing 'pg-data'.
On my system, where I was testing before I gave my answer, I get the following:
docker volume ls | grep pg-data
local pg-data
local postgresstackoverflow_pg-data
As you can see, the docker volume create one is listed first, as a local volume called 'pg-data'; the docker-compose.yml one is next, prefixed (per docker-compose's naming convention) with the name of the directory it was in at the time.
I'm a new Ambassador user here. I have walked through the tutorial in an effort to understand how to use the Ambassador gateway. I am attempting to run this locally via Docker Compose until it's ready for deployment to K8s in production.
My use case is that all HTTP traffic comes in on port 80 and is then directed to the appropriate service. Is it considered best practice to have a docker-compose.yaml file in the working directory that refers to services in the /config directory? I ask because this doesn't appear to actually pick up my files (the Postgres startup doesn't show in the console). And when I run "docker ps" I only see:
CONTAINER ID IMAGE PORTS NAMES
8bc8393ac04c 05a916199684 k8s_statsd_ambassador-8564bfb874-q97l9_default_e775d686-a93c-11e8-9caa-025000000001_0
1c00f2341caf d7cf7cf837f9 k8s_ambassador_ambassador-8564bfb874-q97l9_default_e775d686-a93c-11e8-9caa-025000000001_0
fe20c4819514 05a916199684 k8s_statsd_ambassador-8564bfb874-xzvkl_default_e775ffe6-a93c-11e8-9caa-025000000001_0
ba6415b028ba d7cf7cf837f9 k8s_ambassador_ambassador-8564bfb874-xzvkl_default_e775ffe6-a93c-11e8-9caa-025000000001_0
9df07dc5083d 05a916199684 k8s_statsd_ambassador-8564bfb874-w5vsq_default_e773ed53-a93c-11e8-9caa-025000000001_0
682e1f9902a0 d7cf7cf837f9 k8s_ambassador_ambassador-8564bfb874-w5vsq_default_e773ed53-a93c-11e8-9caa-025000000001_0
bb6d2f749491 quay.io/datawire/ambassador:0.40.2 0.0.0.0:80->80/tcp apigateway_ambassador_1
I have a docker-compose.yaml:
version: '3.1'
# Define the services/containers to be run
services:
  ambassador:
    image: quay.io/datawire/ambassador:0.40.2
    ports:
      - 80:80
    volumes:
      # mount a volume where we can inject configuration files
      - ./config:/ambassador/config
  postgres:
    image: my-postgresql
    ports:
      - '5432:5432'
and in /config/mapping-postgres.yaml:
---
apiVersion: ambassador/v0
kind: Mapping
name: postgres_mapping
rewrite: ""
service: postgres:5432
volumes:
  - ../my-postgres:/docker-entrypoint-initdb.d
environment:
  - POSTGRES_MULTIPLE_DATABASES=db1, db2, db3
  - POSTGRES_USER=<>
  - POSTGRES_PASSWORD=<>
volumes and environment are not valid configs for Ambassador Mappings. Ambassador lets you proxy to Postgres, but the authentication has to be handled by your application.
Having said that, it looks like your Postgres container is not starting (perhaps because it needs an initial config). You can check for errors with:
$ docker ps -a | grep postgres
$ docker logs <container-id-from-previous-step>
You can also check a postgres docker compose example here.
Is it considered best practice to have a docker-compose.yaml file in the working directory that refers to services in the /config directory?
It's pretty standard, but you can use any directory you like for this.
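If you do keep the compose file somewhere else, you can point Compose at it explicitly with -f (the path here is a placeholder):
docker-compose -f /path/to/docker-compose.yaml up -d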
I'm looking into how to mount volumes with docker-compose for data persistence, but I'm having trouble understanding all the examples I read.
https://www.linux.com/learn/docker-volumes-and-networks-compose
version: '2'
services:
  mysql:
    image: mysql
    container_name: mysql
    volumes:
      - mysql:/var/lib/mysql
    ...
volumes:
  mysql:
OK, so this defines a volume named mysql at the bottom, and it references this volume in:
- mysql:/var/lib/mysql
How will the mysql image know to look in this volume named mysql? Is it just designed to look in all the volumes it has to store data, or something?
Then in other examples I see the following:
services:
  nginx:
    image: nginx
    depends_on:
      - ghost
    volumes:
      - ./default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "80:80"
    networks:
      - proxy
This example doesn't need to define a volume; why is that?
Your MySQL data will be stored in the named volume mysql, which is created by:
volumes:
  mysql:
You can list the Docker volumes using docker volume ls, and the 'path' will be something like /var/lib/docker/volumes/mysql/_data. When you cd into this folder you will see the same data as in your mysql container at the path /var/lib/mysql. If you exec inside your container you will see the same data.
How does it know to use this path?
Well, check the Dockerfile of mysql. Here it is:
VOLUME /var/lib/mysql
In short: all your MySQL data is stored in /var/lib/mysql inside the container and mounted to the named Docker volume mysql on your host, whose path is something like /var/lib/docker/volumes/mysql/_data/.
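You can confirm the host path with docker volume inspect (abridged, hypothetical output):
docker volume inspect mysql
[
    {
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/mysql/_data",
        "Name": "mysql",
        "Scope": "local"
    }
]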
The next part mounts ./default.conf (a relative path on your host) to /etc/nginx/conf.d/default.conf inside your nginx container.
Nginx and Ghost don't need a named volume in this case because they don't keep specific data. When you create your environment you will add data using Ghost (write blogs), but the data itself will be stored in the MySQL database, not in the Ghost container.
Remark (in case your second example has nothing to do with the mysql example): the default Ghost image works with an sqlite3 database that lives inside the same container (not one microservice per container, so this is fine for development, not for production). If you use this setup, you need to create a named volume for the SQLite database, which sits in the same container as Ghost. Take a look at the Dockerfile of ghost.
If you want to use MySQL, you will probably need to mount a config file into your Ghost container to tell it to use MySQL. You won't need a named Docker volume for Ghost then, because the data won't be stored in the Ghost container but in the MySQL container.
To keep your last example persistent without MySQL, add a named volume for the SQLite database inside the Ghost container at the path /var/lib/ghost/content (check the Dockerfile again to see this path), as in the sketch below.
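A minimal sketch of that (the volume name ghost-content is my own choice):
services:
  ghost:
    image: ghost
    ports:
      - "2368:2368"
    volumes:
      # persist the built-in SQLite database and uploaded content
      - ghost-content:/var/lib/ghost/content
volumes:
  ghost-content: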
This blog post explains how to set up Ghost with MySQL in docker-compose.
I am using Docker Compose to run several containers, including one with a Postgres image. I am attempting to add a volume to that container to persist my data across container builds. However, I am receiving an error when it tries to create a directory for this volume within the container.
I run:
docker-compose build
then
docker-compose up
And I receive the following error:
ERROR: for cxbenchmark_db_1 Cannot start service db: oci runtime error: container_linux.go:265: starting container process caused "process_linux.go:368: container init caused \"rootfs_linux.go:57: mounting \\"/var/lib/docker/volumes/69845a017b4465e9122852a75ca194db473df95fa218658b8a60fb56eba9be9e/_data\\" to rootfs \\"/var/lib/docker/overlay2/627956d63fb0480448079577a83b0b54f83866fdf31136b7c669541c3f672355/merged\\" at \\"/var/lib/docker/overlay2/627956d63fb0480448079577a83b0b54f83866fdf31136b7c669541c3f672355/merged/var/lib/postgresql/data\\" caused \\"mkdir /var/lib/docker/overlay2/627956d63fb0480448079577a83b0b54f83866fdf31136b7c669541c3f672355/merged/var/lib/postgresql/data: permission denied\\"\""
My full docker-compose.yml looks like this (note the service called db where the volume is defined):
version: '3'
services:
  nginx:
    image: nginx:latest
    ports:
      - 80:8000
    volumes:
      - ./src:/src
      - ./config/nginx:/etc/nginx/conf.d
      - ./src/static:/static
    depends_on:
      - web
  web:
    build: .
    command: bash -c "python manage.py makemigrations && python manage.py migrate && gunicorn cx_benchmark.wsgi -b 0.0.0.0:8000"
    depends_on:
      - db
    volumes:
      - ./src:/src
      - ./src/static:/static
    expose:
      - 8000
  db:
    image: postgres:latest
    volumes:
      - /private/var/lib/postgresql:/var/lib/postgresql
    ports:
      - 5432:5432
Any ideas for how to solve this?
The error you are seeing is not (necessarily) a problem with the explicit volume bind mount in your compose file, but rather with the VOLUME declaration in the official postgres Docker image's Dockerfile:
VOLUME /var/lib/postgresql/data
Since you haven't provided a mount point for this directory (but rather for its parent), the Docker engine creates a local volume and then tries to mount that volume into your already bind-mounted location, hitting a permissions error.
For clarity, here is the volume the docker engine created for you:
/var/lib/docker/volumes/69845a017b4465e9122852a75ca194db473df95fa218658b8a60fb56eba9be9e/_data
And here is the directory location at which it is trying to bind mount that dir; on top of your bind mount from /private/var/lib/postgresql:
mkdir /var/lib/docker/overlay2/627956d63fb0480448079577a83b0b54f83866fdf31136b7c669541c3f672355/merged/var/lib/postgresql/data: permission denied
Now, I think the reason this is failing is that you may have turned on user namespaces in your Docker engine (the "userns-remap" flag/setting), such that the container doesn't have permissions to create a directory in that root-owned location on your host. Barring that, the only other option is that the postgres container is starting as a non-root user, but I don't see anything in your compose file or the official Dockerfile for the latest release that uses the USER directive.
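One way to check whether user namespace remapping is active (hypothetical output; the userns entry only appears when it is enabled):
docker info --format '{{.SecurityOptions}}'
[name=seccomp,profile=default name=userns]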
As an aside, since you are ending up with double volumes because your bind mount doesn't match the VOLUME specifier in the postgres Dockerfile, you could change your compose file to mount to /var/lib/postgresql/data and avoid that extra volume being created; see the sketch below. This matters especially if you expect your DB data to end up in /private/var/lib/postgresql, as it may be surprising to find it isn't there, but rather in the /var/lib/docker/volumes/... location.
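Concretely, a sketch of that change to the db service (paths as in your compose file):
db:
  image: postgres:latest
  volumes:
    # bind-mount onto the exact path declared as VOLUME in the postgres Dockerfile
    - /private/var/lib/postgresql:/var/lib/postgresql/data
  ports:
    - 5432:5432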