I set up PostgreSQL and pgAdmin 4 with Docker Swarm on my Ubuntu 18.04 server, but pgAdmin keeps logging errors; after a while the application crashes and I can't get into Postgres. This is the error:
PermissionError: [Errno 1] Operation not permitted: '/var/lib/pgadmin/sessions'
WARNING: Failed to set ACL on the directory containing the configuration database:
[Errno 1] Operation not permitted: '/var/lib/pgadmin'
HINT : You may need to manually set the permissions on
/var/lib/pgadmin to allow pgadmin to write to it.
/usr/local/lib/python3.8/os.py:1023: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
return io.open(fd, *args, **kwargs)
[2020-09-04 21:12:24 +0000] [1] [INFO] Shutting down: Master
[2020-09-04 21:12:24 +0000] [1] [INFO] Reason: Worker failed to boot.
WARNING: Failed to set ACL on the directory containing the configuration database:
[Errno 1] Operation not permitted: '/var/lib/pgadmin'
HINT : You may need to manually set the permissions on
/var/lib/pgadmin to allow pgadmin to write to it.
NOTE: Configuring authentication for SERVER mode.
sudo: setrlimit(RLIMIT_CORE): Operation not permitted
[2020-09-04 21:14:26 +0000] [1] [INFO] Starting gunicorn 19.9.0
[2020-09-04 21:14:26 +0000] [1] [INFO] Listening at: http://[::]:80 (1)
[2020-09-04 21:14:26 +0000] [1] [INFO] Using worker: threads
/usr/local/lib/python3.8/os.py:1023: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
return io.open(fd, *args, **kwargs)
[2020-09-04 21:14:26 +0000] [89] [INFO] Booting worker with pid: 89
In the output I see NOTE: Configuring authentication for SERVER mode, but I don't know how to configure what it indicates. Could someone help me solve this problem?
Thank you
Edit:
docker-compose.yml
version: '3'
services:
  ssl:
    image: danieldent/nginx-ssl-proxy
    restart: always
    environment:
      UPSTREAM: myApp:8086
      SERVERNAME: dominio.com
    ports:
      - 80:80/tcp
      - 443:443/tcp
    depends_on:
      - myApp
    volumes:
      - ./nginxAPP:/etc/letsencrypt
      - ./nginxAPP:/etc/nginx/user.conf.d:ro
  bdd:
    restart: always
    image: postgres:12
    ports:
      - 5432:5432/tcp
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: 12345
      POSTGRES_DB: miBDD
    volumes:
      - ./pgdata:/var/lib/postgresql/data
  pgadmin:
    image: dpage/pgadmin4
    ports:
      - 9095:80/tcp
    environment:
      PGADMIN_DEFAULT_EMAIL: user
      PGADMIN_DEFAULT_PASSWORD: 12345
      PROXY_X_FOR_COUNT: 3
      PROXY_X_PROTO_COUNT: 3
      PROXY_X_HOST_COUNT: 3
      PROXY_X_PORT_COUNT: 3
    volumes:
      - ./pgadminAplicattion:/var/lib/pgadmin
  myApp:
    restart: always
    image: appImage
    ports:
      - 8086:8086
    depends_on:
      - bdd
    working_dir: /usr/myApp
    environment:
      CONFIG_PATH: ../configuation
    command: "node server.js"
It's generally a bad idea to use bind mounts in non-development environments, and even more so with Docker Swarm (as opposed to regular Docker). This is especially true for images like postgres or dpage/pgadmin4, which require the mounted directories to have specific ownership and/or read/write privileges.
In your case, you need to run:
sudo chown 999:999 pgdata
sudo chown 5050:5050 pgadminAplicattion
to give those directories correct ownership.
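If you'd rather verify those UIDs than take them on faith, here's a quick sketch (assuming the stock postgres:12 and dpage/pgadmin4 images; both let you run id this way):
docker run --rm postgres:12 id postgres                  # expect uid=999(postgres)
docker run --rm --entrypoint id dpage/pgadmin4 pgadmin   # expect uid=5050(pgadmin)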
That being said, it's a much better idea to avoid bind mounts entirely and use named volumes instead (irrelevant parts of Compose file skipped):
version: "3"
services:
bdd:
restart: always
image: postgres:12
ports:
- 5432:5432/tcp
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: 12345
POSTGRES_DB: miBDD
volumes:
- pgdata:/var/lib/postgresql/data
pgadmin:
restart: always
image: dpage/pgadmin4
ports:
- 9095:80/tcp
environment:
PGADMIN_DEFAULT_EMAIL: user
PGADMIN_DEFAULT_PASSWORD: 12345
PROXY_X_FOR_COUNT: 3
PROXY_X_PROTO_COUNT: 3
PROXY_X_HOST_COUNT: 3
PROXY_X_PORT_COUNT: 3
volumes:
- pgadmin:/var/lib/pgadmin
volumes:
pgdata:
pgadmin:
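Note that on Swarm you deploy this with docker stack deploy rather than docker-compose up; the named volumes are then created on whichever node runs the task, prefixed with the stack name (mystack below is a placeholder):
docker stack deploy -c docker-compose.yml mystack
docker volume ls   # shows mystack_pgdata and mystack_pgadmin on that node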
In your case, you need to run the following command:
sudo chown -R 5050:5050 /var/lib/pgadmin
Invocation of the following task:
task__determine_order_details_processing_or_created_status.apply_async(
    args=[order_record.Order_ID],
    eta=datetime.now(GMT_timezone) + timedelta(minutes=1)
)
ends up in a worker timeout. It looks like the method never releases the worker to continue its job:
web_1 | [2019-11-21 05:43:43 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:1559)
web_1 | [2019-11-21 05:43:43 +0000] [1559] [INFO] Worker exiting (pid: 1559)
web_1 | [2019-11-21 05:43:43 +0000] [1636] [INFO] Booting worker with pid: 1636
Whereas the same call invoked from the Django shell creates a perfectly working celery task:
celery_1 | [2019-11-21 05:47:06,500: INFO/MainProcess] Received task: task__determine_order_details_processing_or_created_status[f94708be-a0ab-4853-8785-a11c8c7ca9f1] ETA:[2019-11-21 05:48:06.304924+00:00]
docker-compose.yml:
web:
  build: ./server
  command: gunicorn server.wsgi:application --reload --limit-request-line 16376 --bind 0.0.0.0:8001
  volumes:
    - ./server:/usr/src
  expose:
    - 8001
  env_file: .env.dev
  links:
    - memcached
  depends_on:
    - db_development_2
    - redis
db_development_2:
  restart: always
  image: postgres:latest
  volumes:
    - postgres_development3:/var/lib/postgresql/volume/
  env_file: .env.dev
  logging:
    driver: none
redis:
  image: "redis:alpine"
  restart: always
  logging:
    driver: none
celery:
  build: ./server
  command: celery -A server.celery worker -l info
  env_file: .env.dev
  volumes:
    - ./server:/usr/src
  depends_on:
    - db_development_2
    - redis
  restart: always
celery-beat:
  build: ./server
  command: celery -A server.celery beat -l info
  env_file: .env.dev
  volumes:
    - ./server:/usr/src
  depends_on:
    - db_development_2
    - redis
  restart: always
  logging:
    driver: none
Can you please share more details?
The error is from gunicorn, right?
Are you running this in a Docker environment, with Celery in a different container?
What does your WSGI <-> app command look like? For example:
gunicorn app.wsgi:tour_application -w 6 -b :8000 --timeout 120
Can you try with a larger timeout, like 120 as in the example above?
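For instance, applied to the web service from the compose file above (only --timeout 120 is new; the other flags are unchanged):
command: gunicorn server.wsgi:application --reload --limit-request-line 16376 --bind 0.0.0.0:8001 --timeout 120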
I have a Docker setup with PHP / Postgres / phpPgAdmin, and I need to add psql to the project to upload my SQL dumps.
I found https://hub.docker.com/r/softwareplant/psql and added it to my project, but running the build command I got an error:
docker-compose up -d --build
...
Successfully built 3b0700d7bed8
Successfully tagged lprodsdocker_web:latest
Pulling psql (softwareplant/psql:latest)...
ERROR: manifest for softwareplant/psql:latest not found: manifest unknown: manifest unknown
My docker-compose.yml :
version: '3'
services:
  web:
    build:
      context: ./web # directory of web/Dockerfile.yml
      dockerfile: Dockerfile.yml
    environment:
      - APACHE_RUN_USER=#1000
      # - APACHE_RUN_USER=www-data
    container_name: lprods_web
    volumes:
      - ${APP_PATH_HOST}:${APP_PTH_CONTAINER}
    ports:
      - 8086:80
    working_dir: ${APP_PTH_CONTAINER}
  db:
    image: postgres:9.6.10-alpine
    container_name: lprods_db
    ports:
      - '5433:5432'
    restart: always
    environment:
      POSTGRES_USER: 'postgres'
      POSTGRES_PASSWORD: '1'
      POSTGRES_DB: 'wprods'
    volumes:
      - ./init:/docker-entrypoint-initdb.d/
  phppgadmin:
    image: dockage/phppgadmin:latest
    environment:
      - PHP_PG_ADMIN_SERVER_HOST=db
      - PHP_PG_ADMIN_SERVER_PORT=5432
      - PHP_PG_ADMIN_SERVER_DEFAULT_DB=postgres
    container_name: lprods_phppgadmin
    restart: always
    ports:
      - 8087:80
      - "443:443"
    links:
      - db
  psql:
    image: softwareplant/psql
    environment:
      - PHP_PG_ADMIN_SERVER_HOST=db
      - PHP_PG_ADMIN_SERVER_PORT=5432
      - PHP_PG_ADMIN_SERVER_DEFAULT_DB=postgres
    container_name: lprods_phppgadmin
    restart: always
    ports:
      - 8087:80
      - "443:443"
    links:
      - db
  composer:
    image: composer:1.6
    container_name: lprods_composer
    volumes:
      - ${APP_PATH_HOST}:${APP_PTH_CONTAINER}
    working_dir: ${APP_PTH_CONTAINER}
    command: composer install --ignore-platform-reqs
Which way is correct? The image does exist on https://hub.docker.com...
Is it invalid? Can you advise some other solution?
MODIFIED:
Next I tried https://hub.docker.com/r/governmentpaas/psql:
"Provides psql Postgres client."
The description seems like what I need: to upload a dump into a Postgres DB.
In docker-compose.yml I added this entry:
psql:
  image: governmentpaas/psql
  environment:
    - PHP_PG_ADMIN_SERVER_HOST=db
    - PHP_PG_ADMIN_SERVER_PORT=5432
    - PHP_PG_ADMIN_SERVER_DEFAULT_DB=postgres
  container_name: lprods_psql
  restart: always
  ports:
    - "8088:80"
    - "444:444"
  links:
    - db
It installed fine, and I can see the container on the host OS:
lprods_docker$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a3342c0b0df4 dockage/phppgadmin:latest "/sbin/entrypoint ap" 16 minutes ago Up 16 minutes 0.0.0.0:443->443/tcp, 0.0.0.0:8087->80/tcp lprods_phppgadmin
3ffa2823257a governmentpaas/psql "/bin/sh" 16 minutes ago Restarting (0) 42 seconds ago lprods_psql
7071eaf067d6 lprodsdocker_web "docker-php-entrypoi" 17 minutes ago Up 16 minutes 0.0.0.0:8086->80/tcp lprods_web
4372e269daf8 postgres:9.6.10-alpine "docker-entrypoint.s" 17 minutes ago Up 16 minutes 0.0.0.0:5433->5432/tcp lprods_db
But entering the container's bash, no psql command is found:
# uname -a
Linux 7071eaf067d6 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 GNU/Linux
root@7071eaf067d6:/var/www/lprods_docker_root# whereis psql
# psql -v
bash: psql: command not found
Is this the image I need? If yes, how do I use it?
You have to specify the tag of the image, e.g.:
version: '3'
services:
  [...]
  psql:
    image: softwareplant/psql:dev-jira7
  [...]
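If you want to see which tags exist before picking one, Docker Hub's public API lists them; a rough sketch (the endpoint layout is an assumption based on the public registry, not something from this image's docs):
curl -s 'https://hub.docker.com/v2/repositories/softwareplant/psql/tags?page_size=100' | grep -o '"name": *"[^"]*"'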
I'm writing a docker-compose file to launch some services, but the db service is a troublemaker: I always get this error:
FATAL: password authentication failed for user "postgres"
DETAIL: Password does not match for user "postgres".
Connection matched pg_hba.conf line 95: "host all all all md5"
I've read a lot of threads, and I've correctly set POSTGRES_USER and POSTGRES_PASSWORD. I have also removed the previous volumes and containers to force PostgreSQL to re-initialize the password, but I can't figure out why it's still not working.
So what is the correct way to force re-initialization of the postgres image, so that I can connect to my database?
I've noticed the line Connection matched pg_hba.conf line 95: "host all all all md5" in the error, and I've heard about the postgres config file. But it's an official image, it's supposed to work, isn't it?
version: '3'
services:
  poll:
    build: poll
    container_name: "poll"
    ports:
      - "5000:80"
    networks:
      - poll-tier
    environment:
      - REDIS_HOST=redis
    depends_on:
      - redis
  worker:
    build: worker
    container_name: "worker"
    networks:
      - back-tier
    environment:
      - REDIS_HOST=redis
      - POSTGRES_HOST=db
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=root
    depends_on:
      - redis
      - db
  redis:
    image: "redis:alpine"
    container_name: "redis"
    networks:
      - poll-tier
      - back-tier
  result:
    build: result
    container_name: "result"
    ports:
      - "5001:80"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=root
      - POSTGRES_HOST=db
      - RESULT_PORT=80
    networks:
      - result-tier
    depends_on:
      - db
  db:
    image: "postgres:alpine"
    container_name: "db"
    restart: always
    networks:
      - back-tier
      - result-tier
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=root
      - POSTGRES_DB=postgres
volumes:
  db-data:
    driver: local
networks:
  poll-tier: {}
  back-tier: {}
  result-tier: {}
I expect the db to accept the connection, not to fail password authentication for user "postgres".
Make sure your apps (not the database container) are actually using the POSTGRES_USER and POSTGRES_PASSWORD variables. I suspect they are looking for something like DB_USER or similar, and so aren't getting the right values.
Nearly every PostgreSQL driver and admin tool falls back to the postgres user, which would explain why the error message complains about postgres even if the environment variable isn't being used.
A good way to verify is to change every reference to the database user in the docker-compose file to something like postgres2. I suspect you'll still see apps complaining that password auth failed for postgres.
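A quick way to check what a service actually received (service names as in the compose file above):
docker-compose exec worker env | grep POSTGRES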
In my case, it was caused by the postgres.exe service running in the background on Windows 10. When I stopped the service, uninstalled PostgreSQL 12 from Windows, and restarted, I could finally connect to the Postgres Docker container.
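If you hit the same thing, stopping the service from an elevated prompt should also work; the exact service name depends on the installer (postgresql-x64-12 is only the common default, so treat it as a guess):
net stop postgresql-x64-12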
I'm currently building a package to test some DevOps configurations with AWS, using Swift Vapor 3, PostgreSQL 11, and Docker. Given my GitHub repo, the project builds, tests, and runs just fine with vapor build, vapor test, and vapor run, provided you have a local installation of PostgreSQL with username test and password test.
However, my API is not connecting to my DB, and I'm worried my configuration is wrong.
version: "3.5"
services:
api:
container_name: vapor_it_container
build:
context: .
dockerfile: web.Dockerfile
image: api:dev
networks:
- vapor-it
environment:
POSTGRES_PASSWORD: 'test'
POSTGRES_DB: 'test'
POSTGRES_USER: 'test'
POSTGRES_HOST: db
POSTGRES_PORT: 5432
ports:
- 8080:8080
volumes:
- .:/app
working_dir: /app
stdin_open: true
tty: true
entrypoint: bash
restart: always
depends_on:
- db
db:
container_name: postgres_container
image: postgres:11.2-alpine
restart: unless-stopped
networks:
- vapor-it
ports:
- 5432:5432
environment:
POSTGRES_USER: test
POSTGRES_PASSWORD: test
POSTGRES_HOST: db
POSTGRES_PORT: 5432
PGDATA: /var/lib/postgresql/data
volumes:
- database_data:/var/lib/postgresql/data
pgadmin:
container_name: pgadmin_container
image: dpage/pgadmin4
environment:
PGADMIN_DEFAULT_EMAIL: test#test.com
PGADMIN_DEFAULT_PASSWORD: admin
volumes:
- pgadmin:/root/.pgadmin
ports:
- "${PGADMIN_PORT:-5050}:80"
networks:
- vapor-it
restart: unless-stopped
networks:
vapor-it:
driver: bridge
volumes:
database_data:
pgadmin:
# driver: local
Also while reading the Docker postgres docs I came across this in the "Caveats" section.
If there is no database when postgres starts in a container, then postgres will create the default database for you. While this is the expected behavior of postgres, this means that it will not accept incoming connections during that time. This may cause issues when using automation tools, such as docker-compose, that start several containers simultaneously. (From the postgres page on Docker Hub.)
I have not made those changes because I am not sure how to go about making that file or how the configuration would look. Has anyone done something like this that has some experience with connecting to Postgresql and using vapor as a back end?
The theory is, a well-behaved container should be able to gracefully handle not having its dependencies running, because despite the best efforts of your container scheduler, containers may come and go. So if your app needs a DB, but at any given moment the DB is unavailable, it should respond rationally. For example, returning a 503 for an HTTP request, or trying again after a delay for a scheduled task.
That’s theory though, and not always applicable. In your situation, maybe you really do just need your Vapor app to wait for Postgres to come available, in which case you could use a wrapper script that polls your DB and only starts your main app after the DB is ready.
See this suggested wrapper script from the Docker docs:
#!/bin/sh
# wait-for-postgres.sh
set -e

host="$1"
shift
cmd="$@"

until PGPASSWORD=$POSTGRES_PASSWORD psql -h "$host" -U "postgres" -c '\q'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

>&2 echo "Postgres is up - executing command"
exec $cmd
command: ["./wait-for-postgres.sh", "db", "vapor-app", "run"]
I have Docker installed on Windows and I am trying to install this application. It ships with the following docker-compose.yml file:
version: '2'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile-nginx
    ports:
      - "8085:80"
    networks:
      - attendizenet
    volumes:
      - .:/usr/share/nginx/html/attendize
    depends_on:
      - php
  php:
    build:
      context: .
      dockerfile: Dockerfile-php
    depends_on:
      - db
      - maildev
      - redis
    volumes:
      - .:/usr/share/nginx/html/attendize
    networks:
      - attendizenet
  php-worker:
    build:
      context: .
      dockerfile: Dockerfile-php
    depends_on:
      - db
      - maildev
      - redis
    volumes:
      - .:/usr/share/nginx/html/attendize
    command: php artisan queue:work --daemon
    networks:
      - attendizenet
  db:
    image: postgres
    environment:
      - POSTGRES_USER=attendize
      - POSTGRES_PASSWORD=attendize
      - POSTGRES_DB=attendize
    ports:
      - "5433:5432"
    volumes:
      - ./docker/pgdata:/var/lib/postgresql/data
    networks:
      - attendizenet
  maildev:
    image: djfarrelly/maildev
    ports:
      - "1080:80"
    networks:
      - attendizenet
  redis:
    image: redis
    networks:
      - attendizenet
networks:
  attendizenet:
    driver: bridge
The installation itself goes well, but the PostgreSQL container stops a moment after starting, giving the following error:
2018-03-07 08:24:47.927 UTC [1] FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
2018-03-07 08:24:47.927 UTC [1] HINT: The server must be started by the user that owns the data directory
A plain PostgreSQL container from Docker Hub works smoothly; the error only occurs when we attach a volume to the container.
I am new to Docker, so please excuse any incorrect use of terms.
This is a documented problem with the Postgres Docker image on Windows [1][2][3][4]. Currently, there doesn't appear to be a way to correctly mount Windows directories as volumes. You could instead use a persistent Docker volume, for example:
db:
  image: postgres
  environment:
    - POSTGRES_USER=attendize
    - POSTGRES_PASSWORD=attendize
    - POSTGRES_DB=attendize
  ports:
    - "5433:5432"
  volumes:
    - pgdata:/var/lib/postgresql/data
  networks:
    - attendizenet
volumes:
  pgdata:
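If you later want to see where Docker actually keeps that volume, docker volume inspect shows its mount point; the volume name is the compose project name plus the volume key (attendize is a guess at the project name here):
docker volume inspect attendize_pgdata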
Other things that didn't work:
Setting PGDATA to a subdirectory (see the PGDATA setting):
environment:
  - PGDATA=/var/lib/postgresql/data/mnt
volumes:
  - ./pgdata:/var/lib/postgresql/data
Using a bind mount (docker-compose 3.2):
volumes:
  - type: bind
    source: ./pgdata
    target: /var/lib/postgresql/data
Running as POSTGRES_USER=root
More information:
GitHub: data directory "/var/lib/postgresql/data" has wrong ownership
Docker Forums: postgresql-data-pgdata-has-wrong-ownership
Docker Forums: postgres-to-work-on-persistent-windows-mount
Please refer to reinierkors' answer here. The answer is copied as-is from the link for the reader's convenience; it works for me.
I solved this by mapping my local volume one directory below the one Postgres needs:
version: '3'
services:
  postgres:
    image: postgres
    restart: on-failure
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - PGDATA=/var/lib/postgresql/data/pgdata
      - POSTGRES_DB=postgres
    volumes:
      - ./postgres_data:/var/lib/postgresql
    ports:
      - 5432:5432
I was having the same issue after downgrading my Docker from WSL 2 to WSL 1. As Thomas Taylor said, I solved the issue by using a named volume:
version: '3.8'
services:
  postgres:
    image: timescale/timescaledb:latest-pg12
    ...
    volumes:
      - pgdata:/var/lib/postgresql/data
    ...
volumes:
  pgdata:
Map the local volume (e.g. C:\docker\pgdata) one level (one directory) above what PostgreSQL needs. You can also do it from the command line when starting the container:
docker run -itd -e POSTGRES_USER=pguser -e POSTGRES_PASSWORD=pgpasswd \
-e PGDATA=/var/lib/postgresql/data/pgdata -p 5432:5432 \
-v c:\docker\pgdata:/var/lib/postgresql --name postgresql postgres
I ran into this issue after reinstalling Docker with the WSL 1 backend.
Solution: switch Docker to the WSL 2 backend.
I had the same problem; I had to copy the data directory out at regular intervals:
docker cp <container-name>:/var/lib/postgresql/data C:/docker/volumes/postgres
Inside the container, the Postgres data folder is owned by the postgres user, and your current user may not have access privileges on the mounted folder. You can grant the required permissions with the command below:
chmod 777 ./docker/pgdata
If this command does not resolve the issue, refer to the following link to map users from inside the container to users outside the container:
https://docs.docker.com/engine/security/userns-remap/#prerequisites