Docker Swarm: execute a command in a container - PostgreSQL

So I'm setting up with Docker Swarm.
I am now comfortable with the docker stack deploy -c docker-compose.yml myapp command, which replaces my former docker-compose up.
But one of my services is my DB, and I need to run pg_restore inside it.
Previously with compose, I would run:
docker-compose run --rm postgres pg_restore --rest-of-command
How can I do the same with stack deploy?
Unfortunately, the container created with compose is not the same as the one from stack deploy: the first one is called myapp_postgres while the second is called myapp_postgres.1.zamd6kb6cy4p8mtfha0gn50vh.
I guess I could write something like docker exec 035803286af0, but then I lose all the benefits of the config from docker-compose.yml, which in this case is:
postgres:
  env_file:
    - ./.env
  image: postgres:11.0-alpine
  volumes:
    - "..:/app" # to make the dump accessible to the container
    - "/var/run/postgresql:/var/run/postgresql"
So this solution is not very IaC.
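For reference, the closest workaround I've found resolves the task container by its service-name prefix (the filter pattern is an assumption based on the names above):

docker exec $(docker ps -q --filter name=myapp_postgres) pg_restore --rest-of-command

But that still bypasses the env_file and volumes defined in docker-compose.yml.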
So isn't there a docker service run or something?
Thanks

You can follow the docker image docs (Initialization scripts section):
and create a *.sh script under /docker-entrypoint-initdb.d which will run pg_restore ... when the Postgres container runs as part of the Docker service.
It doesn't seem to be a direct answer to your question, however it may achieve your goal of restoring the dump during Postgres initialization.
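A minimal sketch of such a script (the file name and the /app/db.dump path are assumptions based on the ..:/app volume mount in your compose file):

#!/bin/sh
# /docker-entrypoint-initdb.d/10-restore.sh
# Runs once, when the data directory is first initialized. POSTGRES_USER
# and POSTGRES_DB are set by the image's entrypoint from your environment.
# /app/db.dump is an assumed path based on the ..:/app mount above.
pg_restore -U "$POSTGRES_USER" -d "${POSTGRES_DB:-$POSTGRES_USER}" --no-owner /app/db.dump

Note that init scripts only run when the data directory is empty, so this restores the dump on the first startup of the service, not on every deploy.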

Related

Why isn't Docker Compose honoring my POSTGRES_USER environment variable?

I know lots of questions sound like this, and they all have the same answer: delete your volumes to force it to reinitialize.
The problem is, I'm being careful to delete my volumes, but it's consistently spinning up the container incorrectly every time.
My docker-compose.yml
version: "3.1"
services:
db:
environment:
- POSTGRES_DB=mydb
- POSTGRES_PASSWORD=changeme
- POSTGRES_USER=myuser
image: postgres
My process:
$ docker volume ls
DRIVER VOLUME NAME
$ docker-compose up -v # or docker-compose up --force-recreate
Yet it always creates the "postgres" user instead of myuser. The output when it starts up shows that the data directory "will be owned by user 'postgres'", and I can only docker exec as postgres, not myuser.
The instructions seem very straightforward. Am I missing something, or is this a bug?
What happens when you use the compose file above?
I can only docker exec as postgres, not myuser
The environment variable POSTGRES_USER controls the database user, not the Linux user. Take a look at the chapter Arbitrary --user Notes in the documentation to learn how to change the Linux user.
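To see the difference, you can check from inside the container (the container name is an assumption; use docker ps to find yours):

# The database user myuser does exist; list roles via psql:
docker exec -it <container> psql -U myuser -d mydb -c '\du'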

Creating mongo docker container with local storage on host

I want to run mongo db in a docker container. I've pulled the image and run it, so it seems to work OK.
But every time I start it the DB is overwritten, so I lose any changes. So I want to map the container's internal storage to a folder on my local host.
Should I write a Dockerfile or/and docker-compose.yaml? I suppose this is a simple question, but being new to docker I can't understand what to read to get a full understanding.
You do not need to write a Dockerfile and make things complex; just use the official image as shown in the command or compose file below.
You can use either option, docker run or docker-compose, but the path in the mapping must be correct to keep the data persistent.
Here is the way:
Create a data directory on a suitable volume on your host system, e.g. /my/own/datadir.
Start your mongo container like this:
$ docker run --name some-mongo -v /my/own/datadir:/data/db -d mongo
The -v /my/own/datadir:/data/db part of the command mounts the /my/own/datadir directory from the underlying host system as /data/db inside the container, where MongoDB by default will write its data files.
mongo docker volume
with docker-compose
version: "2"
services:
mongo:
image: mongo:latest
restart: always
ports:
- "27017:27017"
environment:
- MONGO_INITDB_DATABASE=pastime
- MONGO_INITDB_ROOT_USERNAME=root
- MONGO_INITDB_ROOT_PASSWORD=root_password
volumes:
- /my/own/datadir:/data/db
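To verify the mapping works, insert something, restart the container, and read it back (using the docker run container from above; the collection name is illustrative, and on newer images the shell is mongosh rather than mongo):

docker exec some-mongo mongo --eval 'db.test.insertOne({hello: 1})'
docker stop some-mongo && docker start some-mongo
docker exec some-mongo mongo --eval 'db.test.find()'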

How can I persist my data in a docker/postgres container?

I know there are probably many ways to do this. What I am looking for is a way to do it using (preferably) only my Dockerfile and one container.
Here is my current Dockerfile:
FROM postgres:latest
ENV POSTGRES_USER=myuser
ENV POSTGRES_PASSWORD=mypassword
Here is the command I used to build this container:
docker build -t my_db .
And here is the command that I use to run the container:
docker run -p 5432:5432 my_db
What I would like to do is have the data stored in the container if possible, but I don't seem to understand how or where postgres stores its data. I saw on another Stack Overflow post that postgres will store it by default in /var/lib/postgresql/data, but when I look in that folder I see nothing. I can however verify that postgres is running, because I am using a client called TeamSQL, and from that client I can create tables and insert/read data.
I can also verify that when I stop the container and restart it, the data is definitely not persisted.
Note: this is running on OS X, but I don't think that is relevant.
You should use Docker volumes, so when you stop your container the data will persist on the host machine, and when you start the container again the data will be mounted into it:
docker volume create pgdata
docker run -p 5432:5432 -v pgdata:/var/lib/postgresql/data my_db
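A quick way to check that this works (the container ID and table name are illustrative):

# Create a table, then destroy and recreate the container:
docker exec -it <container_id> psql -U myuser -c 'CREATE TABLE t (x int);'
docker rm -f <container_id>
docker run -p 5432:5432 -v pgdata:/var/lib/postgresql/data my_db
# On reconnecting, table t still exists, because the data lives in the pgdata volume.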

Creating a running Postgres service inside a docker container

I'm a bit new to Docker.
I have two containers running using docker-compose.
One is the API and the other is the actual application.
I want to add a new DB container using the Postgres official image.
It's a bit hard to find a simple tutorial on how to create the container and populate it with a predefined SQL file (of schemas and data).
When I start with "CMD /etc/init.d/postgresql start" in the Dockerfile, I get an error saying: "No PostgreSQL clusters exist; see "man pg_createcluster" ... (warning)."
Since it takes me too much time to get things going, I was wondering if it might be better to use an Ubuntu image and install Postgres on my own, since there is only one source on how to use the image - Docker Hub - and I don't seem to understand it that well.
Any ideas or simple steps on how to compose and 'configure' this image?
If you want to populate your database with some file, a simple way to do this is:
How to extend this image
If you would like to do additional initialization in an image derived from this one, add one or more *.sql, *.sql.gz, or *.sh scripts under /docker-entrypoint-initdb.d (creating the directory if necessary). After the entrypoint calls initdb to create the default postgres user and database, it will run any *.sql files and source any *.sh scripts found in that directory to do further initialization before starting the service.
Dockerfile
FROM postgres:alpine
COPY init.sql /docker-entrypoint-initdb.d/init.sql
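For example, init.sql might contain something like this (the schema is purely illustrative):

-- init.sql: executed once, when the data directory is first initialized
CREATE TABLE users (
    id   SERIAL PRIMARY KEY,
    name TEXT NOT NULL
);
INSERT INTO users (name) VALUES ('alice');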
docker-compose.yml
version: '3'
services:
  app:
    # your app definition
  postgres:
    build: .
Pull the postgres image:
docker pull postgres:14.2
Create the service with the below command:
docker service create --name postgres --network my_overlay --env "POSTGRES_PASSWORD=password" --publish 5432:5432 postgres:14.2
Try to connect to the default postgres db, using postgres as the username and password as the password:
jdbc:postgresql://127.0.0.1:5432/postgres // JDBC connection
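Or test it from the host with psql, if it is installed (the service publishes port 5432 on the host):

psql -h 127.0.0.1 -p 5432 -U postgres postgres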

Expose mongo port in another container

I have this (custom) container which runs a java program which requires mongo locally. Now, with docker, I would like to set up mongo in its own container. So I guess, in order to expose this 27017 port locally in the java container, I need to set up an SSH tunnel, right? If there is an easier way, please let me know.
So, there is this official mongo image, but I get the impression ssh is not installed or running. What would be the best approach to do this?
UPDATE: I've rephrased the question to focus on port-forwarding here
You have to make your containers run on the same network. No need to ssh into your mongo or app container.
https://docs.docker.com/engine/userguide/networking/
First define a network
docker network create --driver bridge isolated_nw
Start your containers using that newly created network:
docker run -p 27017:27017 --network=isolated_nw -itd --name=mongo-cont mongo
docker run --network=isolated_nw -itd --name=app your_image
The mongo image includes EXPOSE 27017, so from your app container you should be able to access the mongo container using its name mongo-cont.
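For example, the connection string used by the app would reference the container name as the hostname (the database name here is an assumption):

mongodb://mongo-cont:27017/mydb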
You can build your custom image on top of the official mongodb image, which gives you the flexibility to install additional required packages.
FROM mongo:latest
RUN apt-get update && apt-get install -y ssh
Also try to use docker-compose to build and link your containers together; it will ease the process greatly.
version: '2'
services:
  mongo:
    image: mongo:latest
    ports:
      - "27017"
  custom_project:
    build:
      context: .                    # parent directory of the Dockerfile
      dockerfile: Dockerfile-Custom # name of the Dockerfile
    command: /root/docker-entrypoint.sh
This is the Dockerfile used for the official mongodb image.
You are trying to SSH into your container to gain access to it, but that isn't how you connect. Docker provides functionality to securely connect via the following methods.
Connect to a running container - Docs:
docker exec -it <container name> bash
root@665b4a1e17b6:/#
Start a container from an image, and connect to it - Docs:
docker run -it <image name> bash
root@665b4a1e17b6:/#
Note: If it is an Alpine-based image, it may not have Bash installed. In that case, using sh instead of bash in your commands should work. Mongo's Dockerfile looks to use debian:jessie, which has bash support.
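For example (sh is available in essentially every image, including Alpine-based ones):

docker exec -it <container name> sh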