Communication between two containers with Static IP - postgresql

Guys, I am trying to set up WAL-based redundancy for Postgres on my machine, and I need to put some static IPs into the Postgres .conf files. I am using Docker and docker-compose to do that, but I have a problem: it is not working!! haha
version: '3'
services:
  master:
    build:
      dockerfile: ./Master/Dockerfile
      context: .
    restart: always
    image: master_psql
    container_name: master
    hostname: master
    ports:
      - 5432:5432
    volumes:
      - ./master_volume:/var/lib/postgresql
    networks:
      psql-network:
        ipv4_address: 192.168.200.2
  slave:
    build:
      dockerfile: ./Slave/Dockerfile
      context: .
    restart: always
    image: slave_psql
    container_name: slave
    hostname: slave
    ports:
      - 5433:5432
    volumes:
      - ./slave_volume:/var/lib/postgresql
    networks:
      psql-network:
        ipv4_address: 192.168.200.3
    depends_on:
      - master
networks:
  psql-network:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.200.0/32
Here is my docker-compose file. I am trying to bring it up with docker-compose up -d and receiving this error:
ERROR: for master user specified IP address is supported only when connecting to networks with user configured subnets
My Dockerfiles are:
Master:
FROM postgres:12
RUN apt update && \
    apt install -y vim-tiny \
    net-tools
COPY ./Master/postgresql.conf /postgresql.conf
COPY ./Master/pg_hba.conf /pg_hba.conf
COPY ./Scripts/init.sql /docker-entrypoint-initdb.d/
COPY ./Scripts/set-config_master.sh /docker-entrypoint-initdb.d/set-config_master.sh
EXPOSE 5432
Slave:
FROM postgres:12
RUN apt update && \
    apt install -y vim-tiny \
    net-tools
COPY ./Slave/postgresql.conf /postgresql.conf
COPY ./Slave/recovery.conf /recovery.conf
COPY ./Scripts/init.sql /docker-entrypoint-initdb.d/
COPY ./Scripts/set-config_slave.sh /docker-entrypoint-initdb.d/set-config_slave.sh
EXPOSE 5433
Do you know how I can handle this issue? Or, if you know an easy tutorial to follow to create this structure using Docker, please share it; I have been working on this for about 2 or 3 days in a row.
Thank you!!!!
Edit:
docker network ls
NETWORK ID     NAME                                  DRIVER    SCOPE
23f667e0fa76   aluradockercap06_production-network   bridge    local
56a50842c153   booksearcher_development-network      bridge    local
f1a8e3fa98a9   bridge                                bridge    local
51d08684435f   dockerfile_default                    bridge    local
06ece1ec80aa   host                                  host      local
75835a527e89   my-network                            bridge    local
a566503cb776   none                                  null      local
e1c6830c371a   postgres_psql-network                 bridge    local
2754168d75b0   projeto_default                       bridge    local
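Two things stand out here, so here is a sketch of a possible fix, assuming my reading of the error is right: the list above still contains a postgres_psql-network left over from an earlier run (created before the ipam block existed), and Compose reuses it instead of recreating it with the user-configured subnet; also, a /32 subnet holds exactly one address, so it cannot contain 192.168.200.2 and 192.168.200.3.

docker-compose down
docker network rm postgres_psql-network   # drop the stale network created without a user-configured subnet

Then widen the subnet in docker-compose.yml before running docker-compose up -d again:

networks:
  psql-network:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.200.0/24   # /24 instead of /32, so the static addresses fit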

Related

Docker compose: Error: role "hleb" does not exist

I kindly ask you to help with Docker and Postgres.
I have a local Postgres database and a project on NestJS.
I killed the process on port 5432.
My Dockerfile
FROM node:16.13.1
WORKDIR /app
COPY package.json ./
COPY yarn.lock ./
RUN yarn install
COPY . .
COPY ./dist ./dist
CMD ["yarn", "start:dev"]
My docker-compose.yml
version: '3.0'
services:
  main:
    container_name: main
    build:
      context: .
    env_file:
      - .env
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - 4000:4000
      - 9229:9229
    command: yarn start:dev
    depends_on:
      - postgres
    restart: always
  postgres:
    container_name: postgres
    image: postgres:12
    env_file:
      - .env
    environment:
      PG_DATA: /var/lib/postgresql/data
      POSTGRES_HOST_AUTH_METHOD: 'trust'
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data
    restart: always
volumes:
  pgdata:
.env
DB_TYPE=postgres
DB_HOST=postgres
DB_PORT=5432
DB_USERNAME=hleb
DB_NAME=artwine
DB_PASSWORD=Mypassword
running sudo docker-compose build - NO ERRORS
running sudo docker-compose up --force-recreate - ERROR
ERROR [ExceptionHandler] role "hleb" does not exist.
I've tried multiple suggestions from existing issues but nothing helped.
What am I doing wrong?
Thanks!
Do not use sudo - unless you have to.
Use the latest Postgres release if possible.
The PostgreSQL Docker image provides some environment variables that will help you bootstrap your database.
Be aware:
The PostgreSQL image uses several environment variables which are easy to miss. The only variable required is POSTGRES_PASSWORD, the rest are optional.
Warning: the Docker specific variables will only have an effect if you start the container with a data directory that is empty; any pre-existing database will be left untouched on container startup.
When you do not provide the POSTGRES_USER environment variable in the docker-compose.yml file, it will default to postgres.
Your .env file used for Docker Compose does not contain the Docker-specific environment variables. So amending/extending it to:
POSTGRES_USER=hleb
POSTGRES_DB=artwine
POSTGRES_PASSWORD=Mypassword
should do the trick. You will have to re-create the volume (delete it) to make this work, if the data directory already exists.
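A minimal way to re-create it with docker-compose (note: this deletes everything stored in the pgdata volume):

docker-compose down -v    # stop the stack and remove its named volumes
docker-compose up --force-recreate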

Why is my flask server unable to speak to the postgres database using docker-compose?

I have posted the relevant files below. Everything builds as expected; however, when trying to use SQLAlchemy to make a call to the database, I invariably get the following error:
OperationalError: (psycopg2.OperationalError) could not translate host name "db" to address: Name or service not known
The string that SQLAlchemy is using is (as given in .env.web.dev): postgres://postgres:postgres@db:5432/spaceofmotion.
What am I doing wrong?
docker-compose.yml:
version: '3'
services:
  db:
    container_name: db
    ports:
      - '5432:5432'
    expose:
      - '5432'
    build:
      context: ./
      dockerfile: Dockerfile.postgres
    networks:
      - db_web
  web:
    container_name: web
    restart: always
    build:
      context: ../
      dockerfile: Dockerfile.web
    ports:
      - '5000:5000'
    env_file:
      - ./.env.web.dev
    networks:
      - db_web
    depends_on:
      - db
      - redis
      - celery
  redis:
    image: 'redis:5.0.7-buster'
    container_name: redis
    command: redis-server
    ports:
      - '6379:6379'
  celery:
    container_name: celery
    build:
      context: ../
      dockerfile: Dockerfile.celery
    env_file:
      - ./.env.celery.dev
    command: celery worker -A a.celery --loglevel=info
    depends_on:
      - redis
  client:
    container_name: react-app
    build:
      context: ../a/client
      dockerfile: Dockerfile.client
    volumes:
      - '../a/client:/src/app'
      - '/src/app/node_modules'
    ports:
      - '3000:3000'
    depends_on:
      - "web"
    environment:
      - NODE_ENV=development
      - HOST_URL=http://localhost:5000
networks:
  db_web:
    driver: bridge
Dockerfile.postgres:
FROM postgres:latest
ENV POSTGRES_DB spaceofmotion
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD postgres
COPY ./spaceofmotion-db.sql /
COPY ./docker-entrypoint-initdb.d/restore-database.sh /docker-entrypoint-initdb.d/
restore-database.sh:
file="/spaceofmotion-db.sql"
psql -U postgres spaceofmotion < "$file"
Dockerfile.web:
FROM python:3.7-slim-buster
RUN apt-get update
RUN apt-get -y install python-pip libpq-dev python-dev && \
    pip install --upgrade pip && \
    pip install psycopg2
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["python", "manage.py", "runserver"]
.env.web.dev:
DATABASE_URL=postgres://postgres:postgres@db:5432/spaceofmotion
... <other config vars> ...
Is this specifically coming from your celery container?
Your db container declares
networks:
  - db_web
but the celery container has no such declaration; that means that it will be on the default network Compose creates for you. Since the two containers aren't on the same network they can't connect to each other.
There's nothing wrong with using the Compose-managed default network, especially for routine Web applications, and I'd suggest deleting all of the networks: blocks in the entire file. (You also don't need to specify container_name:, since Compose will come up with reasonable names on its own.)
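Alternatively, if you do want to keep the named network, here is a sketch of the celery service with the missing declaration added (everything else unchanged from the question):

celery:
  container_name: celery
  build:
    context: ../
    dockerfile: Dockerfile.celery
  env_file:
    - ./.env.celery.dev
  command: celery worker -A a.celery --loglevel=info
  depends_on:
    - redis
  networks: # <- the missing declaration
    - db_web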

Starting Tryton server with docker-compose file

I am trying to link an external Postgres container to tryton/tryton from Docker Hub.
docker-compose.yaml
version: '3.7'
services:
  tryton-postgres:
    image: postgres
    ports:
      - 5432:5432
    environment:
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=tryton
    restart: always
  gnuserver:
    image: tryton/tryton:4.6
    links:
      - tryton-postgres:postgres
    ports:
      - 8000:8000
    depends_on:
      - tryton-postgres
    entrypoint: /entrypoint.sh trytond
When I ssh into the container and run trytond-admin --all -d tryton, it seems to be looking for an SQLite file instead of the connected Postgres database. Are there some env variables I must set? What am I missing in my docker-compose file?
Instead of changing the configuration file, with Docker it is simpler to set environment variables like:
DB_USER=
DB_PASSWORD=
DB_HOSTNAME=tryton-postgres
DB_PORT=5432
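In the compose file these go under the gnuserver service; a sketch, assuming the default postgres superuser and the password set on the tryton-postgres service:

gnuserver:
  image: tryton/tryton:4.6
  environment:
    - DB_USER=postgres
    - DB_PASSWORD=password
    - DB_HOSTNAME=tryton-postgres
    - DB_PORT=5432
  ports:
    - 8000:8000
  depends_on:
    - tryton-postgres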
You need to edit /etc/tryton/trytond.conf to point at PostgreSQL:
uri = postgresql://USERNAME:PASSWORD@tryton-postgres:5432/
see the Docs
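For reference, the uri belongs in the [database] section of trytond.conf; a sketch using the credentials from the compose file above (the postgres user is an assumption):

[database]
uri = postgresql://postgres:password@tryton-postgres:5432/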

Cannot connect to postgres db using docker build

I am trying to build an image and deploy it to a VPS.
I am running the app successfully with
docker-compose up
Then I build it with
docker build -t mystore .
When I try to run it for a test locally, or on the VPS through Docker Cloud:
docker run -p 4000:8000 mystore
The container works fine, but when I hit http://0.0.0.0:4000/
I am getting:
OperationalError at /
could not translate host name "db" to address: Name or service not known
I have changed the postgresql.conf listen_addresses to "*"; nothing changes. The PostgreSQL logs are empty. I am running macOS.
Here is my DATABASE config:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'postgres',
        'USER': 'user',
        'PASSWORD': 'password',
        'HOST': 'db',
        'PORT': '5432',
    }
}
This is the Dockerfile
FROM python:3.5
ENV PYTHONUNBUFFERED 1
RUN \
    apt-get -y update && \
    apt-get install -y gettext && \
    apt-get clean
ADD requirements.txt /app/
RUN pip install -r /app/requirements.txt
ADD . /app
WORKDIR /app
EXPOSE 8000
ENV PORT 8000
CMD ["uwsgi", "/app/saleor/wsgi/uwsgi.ini"]
This is the docker-compose.yml file:
version: '2'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
    ports:
      - '5432:5432'
  redis:
    image: redis
    ports:
      - '6379:6379'
  celery:
    build:
      context: .
      dockerfile: Dockerfile
    env_file: common.env
    command: celery -A saleor worker --app=saleor.celeryconf:app --loglevel=info
    volumes:
      - .:/app:Z
    links:
      - redis
    depends_on:
      - redis
  search:
    image: elasticsearch:5.4.3
    mem_limit: 512m
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - '127.0.0.1:9200:9200'
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    env_file: common.env
    depends_on:
      - db
      - redis
      - search
    ports:
      - '8000:8000'
    volumes:
      - .:/app:Z
  makemigrations:
    build: .
    command: python manage.py makemigrations --noinput
    volumes:
      - .:/app:Z
  migration:
    build: .
    command: python manage.py migrate --noinput
    volumes:
      - .:/app:Z
You forgot to add links to your web image
web:
  build: .
  command: python manage.py runserver 0.0.0.0:8000
  env_file: common.env
  depends_on:
    - db
    - redis
    - search
  links: # <- here
    - db
    - redis
    - search
  ports:
    - '8000:8000'
  volumes:
    - .:/app:Z
Check the available networks. There are 3 by default:
$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
db07e84f27a1   bridge    bridge    local
6a1bf8c2d8e2   host      host      local
d8c3c61003f1   none      null      local
I have a simplified setup of your docker-compose file, with only postgres:
version: '2'
services:
  postgres:
    image: postgres
    container_name: postgres
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
    ports:
      - '5432:5432'
    networks:
      random:
networks:
  random:
I gave the postgres container the name postgres and called the service postgres, I created a network called 'random' (the last lines of the file), and I added the postgres service to the random network. If you don't specify a network, you will see that docker-compose creates its own self-named network.
After starting docker-compose, you will have 4 networks: the 3 defaults plus a new bridge network called random.
Check in which network your docker-compose environment is created by inspecting, for example, your postgres container:
$ docker inspect postgres
Mine is created in the network 'random':
"Networks": {
    "random": {..
Now start your mystore container in the same network:
$ docker run -p 4000:8000 --network=random mystore
You can check again with docker inspect. To be sure you can exec inside your mystore container and try to ping postgres. They are deployed inside the same network so this should be possible and your container should be able to translate the name postgres to an address.
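If you only want the network names rather than the full inspect output, the standard format flag trims it down; a minimal sketch:

$ docker inspect -f '{{json .NetworkSettings.Networks}}' postgres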
In your docker-compose.yml, add a network and add your containers to it, like so.
To each container definition add:
networks:
  - mynetwork
and then, at the end of the file, add:
networks:
  mynetwork:
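Applied to the compose file from the question, a minimal sketch (only the relevant keys shown; every service that needs to reach db gets the same networks: block):

version: '2'
services:
  db:
    image: postgres
    networks:
      - mynetwork
  web:
    build: .
    networks:
      - mynetwork
networks:
  mynetwork:

One caveat for the standalone container: Compose prefixes the network name with the project directory name, so the docker run command has to join the full name, e.g. docker run -p 4000:8000 --network=<projectdir>_mynetwork mystore.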

Docker-compose mongoose

I'm new to Docker, and I'm trying the simplest of setups with docker-compose, but I don't succeed in connecting to MongoDB.
My docker-compose.local.yaml file:
version: "2"
services:
  posts-api:
    build:
      dockerfile: Dockerfile.local
      context: ./
    volumes:
      - ".:/app"
    ports:
      - "6820:6820"
    depends_on:
      - mongodb
  mongodb:
    image: mongo:3.5
    ports:
      - "27018:27018"
    command: mongod --port 27018
My Dockerfile:
FROM node:7.8.0
MAINTAINER Livefeed 'project.livefeed@gmail.com'
RUN mkdir /app
VOLUME /app
WORKDIR /app
ADD package.json yarn.lock ./
RUN eval rm -rf node_modules && \
    yarn
ADD server.js .
RUN mkdir config src
ADD config config/
ADD src src/
EXPOSE 6820
EXPOSE 27018
CMD yarn run local
In server.js I try to connect with:
mongoose.connect('mongodb://localhost:27018');
I also tried:
mongoose.connect('mongodb://mongodb:27018');
To run docker-compose:
docker-compose -f docker-compose.local.yaml up --build
And I receive the error:
connection error: { MongoError: failed to connect to server [localhost:27018] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27018]
What am I missing?
In server.js use mongodb instead of localhost:
mongoose.connect('mongodb://mongodb:27018');
Because containers in the same network can communicate using their service name.
Bear in mind that each container and your host have their own localhost. Each localhost is a different host: container A, container B, your host (each one has its own network interface).
Edit:
Be sure to get your mongo up:
docker-compose logs mongodb
docker-compose ps
Sometimes it doesn't come up because of a lack of disk space.
Edit 2:
With newer versions of mongo, you need to specify to listen to all interfaces too:
command: mongod --port 27018 --bind_ip_all
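Putting the two fixes together, a sketch of the relevant pieces (names taken from the question):

# docker-compose.local.yaml
mongodb:
  image: mongo:3.5
  ports:
    - "27018:27018"
  command: mongod --port 27018 --bind_ip_all

// server.js
mongoose.connect('mongodb://mongodb:27018');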
I think that you should add the links option to your config, like this:
ports:
  - "6820:6820"
depends_on:
  - mongodb
links:
  - mongodb
Update: as I promised, here is my setup:
version: '2.1'
services:
  pm2:
    image: keymetrics/pm2-docker-alpine:6
    restart: always
    container_name: pm2
    volumes:
      - ./pm2:/app
    links:
      - redis_db
      - db
    environment:
      REDIS_CONNECTION_STRING: redis://redis_db:6379
  nginx:
    image: firesh/nginx-lua
    restart: always
    volumes:
      - ./nginx:/etc/nginx
      - /var/run/docker.sock:/tmp/docker.sock:ro
    ports:
      - 80:80
    links:
      - pm2
  s3: # mock for development
    image: lphoward/fake-s3:latest
  redis_db:
    container_name: redis_db
    image: redis
    ports:
      - 6379:6379
  db: # for scorebig-syncer
    image: mysql:5.7
    ports:
      - 3306:3306