I keep getting
User "postgres" has no password assigned.
My updated .env:
POSTGRES_PORT=5432
POSTGRES_DB=demo_db2
POSTGRES_USER=postgres
POSTGRES_PASSWORD=password
Even though the postgres password is set.
I'm trying to use the same variables from the following command
docker run --name demo4 -e POSTGRES_PASSWORD=password -d postgres
Could this be an issue with volumes? I'm very confused.
I ran this command as well:
docker run -it --rm --name demo4 -p 5432:5432 -e POSTGRES_PASSWORD=password -e POSTGRES_USER=postgress postgres:9.4
docker-compose.yml
# docker-compose.yml
version: "3"
services:
  app:
    build: .
    depends_on:
      - database
    ports:
      - 8000:8000
    environment:
      - POSTGRES_HOST=database
  database:
    image: postgres:9.6.8-alpine
    volumes:
      - pgdata:/var/lib/postgresql/pgdata
    ports:
      - 8002:5432
  react_client:
    build:
      context: ./client
      dockerfile: Dockerfile
    image: react_client
    working_dir: /home/node/app/client
    volumes:
      - ./:/home/node/app
    ports:
      - 8001:8001
    env_file:
      - ./client/.env
volumes:
  pgdata:
You are missing the inclusion of the .env file...
Docker Compose:
database:
  environment:
    - ENV_VAR=VALUE
or
database:
  env_file:
    - .env
Plain Docker:
docker run [options] --env ENV_VAR=VALUE ...
or
docker run [options] --env-file .env ...
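Applied to the compose file above, a minimal fix could look like this (a sketch; it assumes the .env file sits next to docker-compose.yml):

database:
  image: postgres:9.6.8-alpine
  env_file:
    - .env
  volumes:
    - pgdata:/var/lib/postgresql/pgdata
  ports:
    - 8002:5432

Separately, note that the volume is mounted at /var/lib/postgresql/pgdata while the official image keeps its data in /var/lib/postgresql/data by default, so for that mount to actually persist the cluster you would also need to set PGDATA=/var/lib/postgresql/pgdata.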
Related
I have a Django app with a Postgres database, which I run locally with docker-compose. I would like to replace the local data in the database with my backup from the production server to do some optimizations.
my docker-compose.yml looks like:
version: '3.8'
services:
  app:
    build: .
    links:
      - db
    depends_on:
      - db
    volumes:
      - .:/code
    ports:
      - '8000:8000'
    environment:
      - POSTGRES_DB=DB
      - POSTGRES_USER=USER
      - POSTGRES_PASSWORD=PASSWORD
      - POSTGRES_HOST=db
      - POSTGRES_PORT=5432
    command: >
      bash -c "python manage.py makemigrations --merge
      && python manage.py migrate
      && python manage.py runserver 0.0.0.0:8000"
  db:
    image: postgres:12
    environment:
      - POSTGRES_DB=DB
      - POSTGRES_USER=USER
      - POSTGRES_PASSWORD=PASSWORD
    volumes:
      - pg_db:/var/lib/postgresql/data
    ports:
      - 5432:5432
volumes:
  pg_db:
When I run docker volume ls, I can see my local driver and volume name. Could somebody give me a hint on how to load the data from my SQL file into my local database?
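A common approach is to stream the dump into psql inside the running db container. A minimal sketch, assuming a plain-SQL dump named backup.sql in the current directory (the file name is a placeholder):

# start only the database service, then feed the dump to psql inside it
docker-compose up -d db
docker-compose exec -T db psql -U USER -d DB < backup.sql

or, with plain docker and the container's name:

cat backup.sql | docker exec -i <db-container> psql -U USER -d DB

If the backup was created with pg_dump -Fc (custom format), pipe it to pg_restore -U USER -d DB instead of psql.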
Hello, I'm new to Docker and Linux, and I don't know how to make my container run a script that waits for Postgres before starting.
this is my script:
#!/bin/sh
# block until Postgres accepts connections, then run the given command
until psql -c '\l'; do
    echo >&2 "$(date +%Y%m%dt%H%M%S) Postgres is unavailable - sleeping"
    sleep 1
done
echo >&2 "$(date +%Y%m%dt%H%M%S) Postgres is up - executing command"
exec "$@" # "$@" runs the passed command; ${#} would only expand to the argument count
My Dockerfile:
FROM node:lts-alpine
RUN mkdir -p /home/node/api/node_modules && chown -R node:node /home/node/api
WORKDIR /home/node/api
COPY wait-pg.sh ormconfig.json .env package.json yarn.* ./
USER node
RUN yarn
COPY --chown=node:node . .
EXPOSE 4000
RUN chmod +x /wait-pg.sh
CMD ["yarn", "dev"]
My docker-compose.yml:
version: '3.7'
services:
  db-pg:
    image: postgres:12
    container_name: db-pg
    ports:
      - '${DB_PORT}:5432'
    environment:
      ALLOW_EMPTY_PASSWORD: 'no'
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASS}
      POSTGRES_DB: ${DB_NAME}
    volumes:
      - ci-postgres-data:/data
  ci-api:
    build: .
    container_name: ci-api
    volumes:
      - .:/home/node/api
      - /home/node/api/node_modules
    ports:
      - '${SERVER_PORT}:${SERVER_PORT}'
    depends_on:
      - db-pg
    logging:
      driver: 'json-file'
      options:
        max-size: '10m'
        max-file: '5'
volumes:
  ci-postgres-data:
If someone could help me with my docker-compose and my Dockerfile: I would like to know how I can add my script to both, so that the API only starts after Postgres is up.
Check the below Dockerfile
FROM node:lts-alpine
RUN mkdir -p /home/node/api/node_modules && chown -R node:node /home/node/api
WORKDIR /home/node/api
COPY wait-pg.sh ormconfig.json .env package.json yarn.* ./
USER node
RUN yarn
COPY --chown=node:node . .
EXPOSE 4000
CMD ["yarn", "dev"]
Please try the below docker-compose.yml, in which depends_on (with a healthcheck condition), healthcheck and links are added, since the web service depends on the db service.
version: "2.1"
services:
ci-api:
build: .
container_name: ci-api
volumes:
- .:/home/node/api
- /home/node/api/node_modules
ports:
- '${SERVER_PORT}:${SERVER_PORT}'
links:
- db-pg
depends_on:
- db-pg
logging:
driver: 'json-file'
options:
max-size: '10m'
max-file: '5'
db-pg:
image: postgres:12
container_name: db-pg
ports:
- '${DB_PORT}:5432'
environment:
ALLOW_EMPTY_PASSWORD: 'no'
POSTGRES_USER: ${DB_USER}
POSTGRES_PASSWORD: ${DB_PASS}
POSTGRES_DB: ${DB_NAME}
volumes:
- ci-postgres-data:/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
interval: 10s
timeout: 5s
retries: 5
volumes:
ci-postgres-data:
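If you still want to use your wait-pg.sh script instead, a minimal sketch (assuming the script is the one above, already copied into the work dir and executable) is to make it the entrypoint, so the CMD is handed to its exec "$@":

# at the end of the Dockerfile above
ENTRYPOINT ["./wait-pg.sh"]
CMD ["yarn", "dev"]

Note that psql is not included in node:lts-alpine, and the script would need connection details (PGHOST=db-pg, PGUSER, PGPASSWORD) in the service environment, which is why the healthcheck above, which runs pg_isready inside the postgres container itself, is usually the simpler route.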
I've got the following docker-compose file and it serves up the application on port 80 fine.
version: '3'
services:
  backend:
    build: ./Django-Backend
    command: gunicorn testing.wsgi:application --bind 0.0.0.0:8000 --log-level debug
    expose:
      - "8000"
    volumes:
      - static:/code/backend/static
    env_file:
      - ./.env.prod
  nginx:
    build: ./nginx
    ports:
      - 80:80
    volumes:
      - static:/static
    depends_on:
      - backend
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
volumes:
  static:
  postgres_data:
Once in there, I can log into the admin and add an extra user, which gets saved to the database: I can reload the page and the user is still there. Once I stop the backend docker container, however, that user is gone. Given that Postgres is running in a different container and I'm not bringing it down, I'm unsure how stopping and restarting the backend container causes the data to become unavailable.
Thanks in advance.
EDIT:
I'm bringing up the docker container with the following command.
docker-compose -f docker-compose.prod.yml up -d
I'm bringing the container down by just stopping it in Docker Desktop.
I'm running Django 3 for the backend, and I've also tried adding a superuser in the terminal while the container is running:
# python manage.py createsuperuser
Username (leave blank to use 'root'): mikey
Email address:
Password:
Password (again):
This password is too common.
Bypass password validation and create user anyway? [y/N]: y
Superuser created successfully.
This works, and the user exists while the container is running. However, once again, when I shut the container down via Docker Desktop and restart it, the user that was just created is gone.
FURTHER EDIT:
settings.py (using dotenv, "from dotenv import load_dotenv"):
DATABASES = {
    "default": {
        "ENGINE": os.getenv("SQL_ENGINE"),
        "NAME": os.getenv("SQL_DATABASE"),
        "USER": os.getenv("SQL_USER"),
        "PASSWORD": os.getenv("SQL_PASSWORD"),
        "HOST": os.getenv("SQL_HOST"),
        "PORT": os.getenv("SQL_PORT"),
    }
}
with the .env.prod file having the following values:
DEBUG=0
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=postgres
SQL_USER=postgres
SQL_PASSWORD=postgres
SQL_HOST=db
SQL_PORT=5432
SOLUTION:
Read the comments to see the diagnosis by other legends, but the updated docker-compose file looks like this. Note the "depends_on" block.
version: '3'
services:
  backend:
    build: ./Django-Backend
    command: gunicorn testing.wsgi:application --bind 0.0.0.0:8000 --log-level debug
    expose:
      - "8000"
    volumes:
      - static:/code/backend/static
    env_file:
      - ./.env.prod
    depends_on:
      - db
  nginx:
    build: ./nginx
    ports:
      - 80:80
    volumes:
      - static:/static
    depends_on:
      - backend
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    expose:
      - "5432"
volumes:
  static:
  postgres_data:
FINAL EDIT:
Added the following code to my entrypoint.sh file to ensure Postgres is ready to accept connections by the backend container.
if [ "$DATABASE" = "postgres" ]
then
echo "Waiting for postgres..."
while ! nc -z $SQL_HOST $SQL_PORT; do
sleep 0.1
done
echo "PostgreSQL started"
fi
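For reference, a sketch of how such an entrypoint script is typically wired into the backend Dockerfile (the paths here are assumptions, not taken from the original):

# copy the wait script and register it as the entrypoint
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

entrypoint.sh should end with exec "$@" so that the gunicorn command from docker-compose still runs once the wait loop finishes.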
I'm trying to mount my postgres.conf and pg_hba.conf using docker-compose, and I'm having difficulty understanding why it works when run using the docker CLI but doesn't with docker-compose.
The following docker-compose causes the container to crash with this error:
/usr/local/bin/docker-entrypoint.sh: line 176: /config_file=/etc/postgresql/postgres.conf: No such file or directory
docker-compose.yml
services:
  postgres-master:
    image: postgres:11.4
    container_name: postgres-master
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
      - /home/agilob/dockers/pg/data:/var/lib/postgresql/data:rw
      - $PWD/pg:/etc/postgresql:rw
      - /etc/localtime:/etc/localtime:ro
    hostname: 'primary'
    environment:
      - PGHOST=/tmp
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=postgres
      - MAX_CONNECTIONS=10
      - MAX_WAL_SENDERS=5
      - PG_MODE=primary
      - PUID=1000
      - PGID=1000
    ports:
      - "5432:5432"
    command: 'config_file=/etc/postgresql/postgres.conf hba_file=/etc/postgresql/pg_hba.conf'
This command works fine:
docker run -d --name some-postgres -v "$PWD/postgres.conf":/etc/postgresql/postgresql.conf postgres -c 'config_file=/etc/postgresql/postgresql.conf'
Also, when I remove the command: section and run the same docker-compose:
$ docker-compose -f postgres-compose.yml up -d
Recreating postgres-master ... done
$ docker exec -it postgres-master bash
root@primary:/# cd /etc/postgresql
root@primary:/etc/postgresql# ls
pg_hba.conf  postgres.conf
The files are present in /etc/postgresql.
Files in $PWD/pg are present:
$ ls pg
pg_hba.conf postgres.conf
The following works fine:
command: postgres -c config_file='/etc/postgresql/postgres.conf' -c 'hba_file=/etc/postgresql/pg_hba.conf'
In docker-compose, command replaces the image's CMD entirely, so it has to start with the postgres executable and pass each setting via -c, just like the arguments after the image name in the working docker run example. Without that, the entrypoint script tries to execute the bare 'config_file=...' string as a program, which produces the "No such file or directory" error above.
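To verify the server actually picked up the mounted file, you can query it once the container is up (names as in the compose file above):

$ docker exec -it postgres-master psql -U postgres -c 'SHOW config_file;'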
I am trying to build an image and deploy it to a VPS.
I am running the app successfully with
docker-compose up
Then I build it with
docker build -t mystore .
When I try to run it for a test locally, or on the VPS through Docker Cloud:
docker run -p 4000:8000 mystore
The container works fine, but when I hit http://0.0.0.0:4000/
I am getting:
OperationalError at /
could not translate host name "db" to address: Name or service not known
I have changed listen_addresses in postgresql.conf to "*"; nothing changes. The PostgreSQL logs are empty. I am running macOS.
Here is my DATABASE config:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'postgres',
        'USER': 'user',
        'PASSWORD': 'password',
        'HOST': 'db',
        'PORT': '5432',
    }
}
This is the Dockerfile
FROM python:3.5
ENV PYTHONUNBUFFERED 1
RUN \
apt-get -y update && \
apt-get install -y gettext && \
apt-get clean
ADD requirements.txt /app/
RUN pip install -r /app/requirements.txt
ADD . /app
WORKDIR /app
EXPOSE 8000
ENV PORT 8000
CMD ["uwsgi", "/app/saleor/wsgi/uwsgi.ini"]
This is the docker-compose.yml file:
version: '2'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
    ports:
      - '5432:5432'
  redis:
    image: redis
    ports:
      - '6379:6379'
  celery:
    build:
      context: .
      dockerfile: Dockerfile
    env_file: common.env
    command: celery -A saleor worker --app=saleor.celeryconf:app --loglevel=info
    volumes:
      - .:/app:Z
    links:
      - redis
    depends_on:
      - redis
  search:
    image: elasticsearch:5.4.3
    mem_limit: 512m
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - '127.0.0.1:9200:9200'
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    env_file: common.env
    depends_on:
      - db
      - redis
      - search
    ports:
      - '8000:8000'
    volumes:
      - .:/app:Z
  makemigrations:
    build: .
    command: python manage.py makemigrations --noinput
    volumes:
      - .:/app:Z
  migration:
    build: .
    command: python manage.py migrate --noinput
    volumes:
      - .:/app:Z
You forgot to add links to your web service:
web:
  build: .
  command: python manage.py runserver 0.0.0.0:8000
  env_file: common.env
  depends_on:
    - db
    - redis
    - search
  links: # <- here
    - db
    - redis
    - search
  ports:
    - '8000:8000'
  volumes:
    - .:/app:Z
Check the available networks. There are 3 by default:
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
db07e84f27a1 bridge bridge local
6a1bf8c2d8e2 host host local
d8c3c61003f1 none null local
I've made a simplified setup of your docker-compose, with only postgres:
version: '2'
services:
  postgres:
    image: postgres
    container_name: postgres
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
    ports:
      - '5432:5432'
    networks:
      random:
networks:
  random:
I gave the postgres container the name postgres and called the service postgres. I created a network called 'random' (the last lines) and added the postgres service to it. If you don't specify a network, you will see that docker-compose creates its own self-named network.
After starting docker-compose, you will have 4 networks. A new bridge network called random.
Check which network your docker-compose environment was created in by inspecting, for example, your postgres container. Mine is in the network 'random':
$ docker inspect postgres
"Networks": {
    "random": {..
Now start your mystore container in the same network:
$ docker run -p 4000:8000 --network=random mystore
You can check again with docker inspect. To be sure, you can exec into your mystore container and try to ping postgres. Since both are deployed inside the same network, this should work, and your container should be able to resolve the name postgres to an address.
In your docker-compose.yml, add a network and add your containers to it like so.
To each container definition add:
networks:
  - mynetwork
and then, at the end of the file, add:
networks:
  mynetwork:
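Applied to the compose file above, the relevant parts would look like this (a sketch showing only the added keys):

services:
  db:
    # ...existing db config...
    networks:
      - mynetwork
  web:
    # ...existing web config...
    networks:
      - mynetwork
networks:
  mynetwork:

You could then attach the standalone container with something like docker run -p 4000:8000 --network=<project>_mynetwork mystore. Note that docker-compose usually prefixes the network name with the project (directory) name, so check docker network ls for the exact name.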