Docker ownership of '/var/lib/postgresql/data' - postgresql

docker-compose is not running and I don't know why. It fails with the following error:
chown: changing ownership of '/var/lib/postgresql/data': Operation not permitted
At the suggestion of a member on the Docker community Slack channel I installed Docker via Homebrew, but that hasn't solved the problem. Another Stack Overflow post suggested shelling into the container and changing the permissions, but that doesn't make sense to me - /var/lib/postgresql/data is created on startup.
Here is the docker-compose file -
version: "3.9"
services:
db:
restart: always
image: postgres
volumes:
- ./docker-entrypoint-initdb.d/init.sql:/docker-entrypoint-initdb.d/init.sql
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_NAME=dev-postgres
- POSTGRES_USER=pixel
- POSTGRES_DATABASE=lightchan
- POSTGRES_PASSWORD=stardust
web:
build: .
restart: always
volumes:
- .:/code
command: sh -c "./waitfor.sh db:5432 -- python3 manage.py runserver"
ports:
- "8001:8001"
environment:
- POSTGRES_NAME=dev-postgres
- POSTGRES_USER=pixel
- POSTGRES_DATABASE=lightchan
- POSTGRES_PASSWORD=stardust
- POSTGRES_HOST=db

Docker pgadmin 4 - error: "does not appear to be a valid email address. Please reset the PGADMIN_DEFAULT_EMAIL environment variable"

Please bear with me, I'm rather new to Docker.
I've got the following docker-compose.yaml file from my colleague, who runs this on Windows - apparently without problems:
version: "3.3"
services:
mysql-server:
image: mysql:8.0.19
restart: always
environment:
MYSQL_ROOT_PASSWORD: secret
volumes:
- mysql-data:/var/lib/mysql
ports:
- "33061:33061"
phpmyadmin:
image: phpmyadmin/phpmyadmin:5.1.1
restart: always
environment:
PMA_HOST: mysql-server
PMA_USER: ${PMA_USER}
PMA_PASSWORD: ${PMA_PASSWORD}
UPLOAD_LIMIT: 256M
MAX_EXECUTION_TIME: 0
ports:
- "8080:80"
volumes:
- ./database/config.user.inc.php:/etc/phpmyadmin/config.user.inc.php
postgresdb:
container_name: pg_container
image: postgres:latest
restart: always
ports:
- "54321:54321"
environment:
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_DB=${POSTGRES_DB}
volumes:
- postgres:/var/lib/postgresql/data
pgadmin:
container_name: pgadmin_container
depends_on:
- postgresdb
image: dpage/pgadmin4:5
restart: always
ports:
- "5556:80"
environment:
- PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}
- PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}
volumes:
- pgadmin:/var/lib/pgadmin
web:
build:
context: .
dockerfile: dockerfile-python
command: python3 manage.py runserver 0.0.0.0:8000
container_name: python_myApp
volumes:
- .:/theApp
ports:
- "8000:8000"
depends_on:
- postgresdb
volumes:
mysql-data:
postgres:
pgadmin:
I run it on Linux; my version is Docker version 20.10.9, build c2ea9bc.
The problem is, the pgadmin container won't start up - it gives me the following error:
'"server@myapp.de"' does not appear to be a valid email address. Please reset the PGADMIN_DEFAULT_EMAIL environment variable and try again.
The .env file looks like that:
PMA_USER="root"
PMA_PASSWORD="XXXX"
POSTGRES_DB='postgres'
POSTGRES_USER='admin'
POSTGRES_PASSWORD='XXXX'
PGADMIN_DEFAULT_EMAIL="server#myapp.de"
PGADMIN_DEFAULT_PASSWORD="XXXX"
I tried to reset everything by running
docker system prune
docker volume prune
but the error persists. What's going wrong here?
thanks!
You don't need any quotes (") in env files; just remove them:
PMA_USER=root
PMA_PASSWORD=XXXX
POSTGRES_DB=postgres
POSTGRES_USER=admin
POSTGRES_PASSWORD=XXXX
PGADMIN_DEFAULT_EMAIL=server@myapp.de
PGADMIN_DEFAULT_PASSWORD=XXXX
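If in doubt, one way to see exactly what Compose passes to the containers (and whether the quotes end up inside the value) is to print the resolved configuration:
docker-compose config
# the interpolated environment values are shown as the containers will receive them;
# with quotes in the .env file you would see them embedded in PGADMIN_DEFAULT_EMAIL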

Airflow via docker-compose keeps trying to access sqlite although postgres configured

I'm trying to set up a Dockerized Airflow instance, but whatever I do (so far...) it keeps trying to access some SQLite3 database, and I don't know where that instruction comes from. I point to the Postgres instance everywhere (deemed) possible through AIRFLOW__CORE__SQL_ALCHEMY_CONN, and even AIRFLOW_CONN_METADATA_DB.
A typical error message on startup looks like this:
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: job
Full docker-compose.yml:
version: '3'
x-airflow-common:
  &airflow-common
  image: apache/airflow:2.0.0
  environment:
    - AIRFLOW__CORE__EXECUTOR=LocalExecutor
    - AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql+psycopg2://postgres:postgres@db:9501/airflow
    - AIRFLOW_CONN_METADATA_DB=postgres+psycopg2://postgres:postgres@db:9501/airflow
    - AIRFLOW__CORE__FERNET_KEY=FB0o_zt4e3Ziq3LdUUO7F2Z95cvFFx16hU8jTeR1ASM=
    - AIRFLOW__CORE__LOAD_EXAMPLES=True
    - AIRFLOW__CORE__LOGGING_LEVEL=INFO
  volumes:
    - /home/x/docker/airflow/dags:/opt/airflow/dags
    - /home/x/docker/airflow/airflow-data/logs:/opt/airflow/logs
    - /home/x/docker/airflow/airflow-data/plugins:/opt/airflow/plugins
    - /home/x/docker/airflow/airflow-data/airflow.cfg:/opt/airlfow/airflow.cfg
  depends_on:
    - db
services:
  db:
    image: postgres:12
    #image: postgres:12.1-alpine
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=airflow
      - POSTGRES_PORT=9501
      - POSTGRES_HOST_AUTH_METHOD=trust
    ports:
      - 9501:9501
    command:
      - -p 9501
  airflow-init:
    <<: *airflow-common
    container_name: airflow_init
    entrypoint: /bin/bash
    environment:
      - SQL_ALCHEMY_CONN=postgresql://postgres:postgres@db:9501/airflow
      - AIRFLOW_CONN_METADATA_DB=postgres://postgres:postgres@db:9501/airflow
    command:
      - -c
      - airflow users list || ( airflow db init &&
        airflow users create
          --role Admin
          --username airflow
          --password airflow
          --email airflow@airflow.com
          --firstname airflow
          --lastname airflow )
    restart: on-failure
  airflow-webserver:
    <<: *airflow-common
    command: airflow webserver
    ports:
      - 9500:8080
    container_name: airflow_webserver
    environment:
      - AIRFLOW_USERNAME=airflow
      - AIRFLOW_PASSWORD=airflow
      - SQL_ALCHEMY_CONN=postgresql://postgres:postgres@db:9501/airflow
      - AIRFLOW_CONN_METADATA_DB=postgres://postgres:postgres@db:9501/airflow
    restart: always
  airflow-scheduler:
    <<: *airflow-common
    command: airflow scheduler
    container_name: airflow_scheduler
    environment:
      - SQL_ALCHEMY_CONN=postgresql://postgres:postgres@db:9501/airflow
      - AIRFLOW_CONN_METADATA_DB=postgres://postgres:postgres@db:9501/airflow
    restart: always
I solved it by following this docker-compose.yaml file:
https://github.com/apache/airflow/blob/master/docs/apache-airflow/start/docker-compose.yaml
Instead of trying to tweak the ports of Postgres (and Redis), I used the "expose" option, which avoids conflicts with other containers on the same host.
So not:
environment:
  POSTGRES_PORT: 9501
ports:
  - 9501:9501
But run it (internally) with the default port and do not try to publish it externally:
expose:
  - 5432
Still not sure what the problem was with using the higher ports. It may be some default fallback to SQLite when the configured DB cannot be reached for some reason.
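One way to confirm which connection string Airflow actually ends up with inside the containers (a quick check, assuming the service names from the compose file above):
docker-compose exec airflow-webserver airflow config get-value core sql_alchemy_conn
# should print the postgresql+psycopg2:// URL; a sqlite:/// path here means the
# AIRFLOW__CORE__SQL_ALCHEMY_CONN variable is not reaching the process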

docker compose phpmyadmin php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution

I am trying to set up a Docker stack with Laravel, MariaDB, nginx, Redis and phpMyAdmin. The Laravel webspace works fine, but if I switch to port 10081, as configured in the docker-compose.yml, I am not able to log in with the root account.
It says "mysqli::real_connect(): php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution".
I already tried to configure a "my-network" which links all of the containers, but if I understand Docker right there is already a "default" network which does this. It didn't change the error message anyway.
Here is my full docker-compose file:
version: "3.8"
services:
redis:
image: redis:6.0-alpine
expose:
- "6380"
db:
image: mariadb:10.4
ports:
- "3307:3306"
environment:
MYSQL_USERNAME: root
MYSQL_ROOT_PASSWORD: secret
MYSQL_DATABASE: laravel
volumes:
- db-data:/var/lib/mysql
nginx:
image: nginx:1.19-alpine
build:
context: .
dockerfile: ./docker/nginx.Dockerfile
restart: always
depends_on:
- php
ports:
- "10080:80"
networks:
- default
environment:
VIRTUAL_HOST: cockpit.example.de
volumes:
- ./docker/nginx.conf:/etc/nginx/nginx.conf:ro
- ./public:/app/public:ro
php:
build:
target: dev
context: .
dockerfile: ./docker/php.Dockerfile
working_dir: /app
env_file: .env
restart: always
expose:
- "9000"
depends_on:
- composer
- redis
- db
volumes:
- ./:/app
- ./docker/www.conf:/usr/local/etc/php-fpm.d/www.conf:ro
links:
- db:mysql
phpmyadmin:
image: phpmyadmin/phpmyadmin:latest
ports:
- 10081:80
restart: always
environment:
PMA_HOST : db
MYSQL_USERNAME: root
MYSQL_ROOT_PASSWORD: secret
depends_on:
- db
#user: "109:115"
links:
- db:mysql
node:
image: node:12-alpine
working_dir: /app
volumes:
- ./:/app
command: sh -c "npm install && npm run watch"
composer:
image: composer:1.10
working_dir: /app
#environment:
#SSH_AUTH_SOCK: /ssh-auth.sock
volumes:
- ./:/app
#- "$SSH_AUTH_SOCK:/ssh-auth.sock"
- /etc/passwd:/etc/passwd:ro
- /etc/group:/etc/group:ro
command: composer install --ignore-platform-reqs --no-scripts
volumes:
db-data:
Make sure you have defined all attributes correctly for the phpmyadmin container; in the current case the network definition was missing:
phpmyadmin:
  image: phpmyadmin/phpmyadmin:latest
  container_name: phpmyadmin
  restart: always
  ports:
    # 8080 is the host port and 80 is the container port
    - 8080:80
  environment:
    - PMA_ARBITRARY=1
    - PMA_HOST=mysql
    - MYSQL_USERNAME=root
    - MYSQL_ROOT_PASSWORD=secret
  depends_on:
    - mysql
  networks:
    # define your network where all containers are connected to each other
    - laravel
  volumes:
    # define the directory path where you store phpMyAdmin's persistent data and
    # config files
    - ./docker/phpmyadmin
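For the networks key above to work, the laravel network also has to be declared at the top level of the compose file; a minimal sketch, assuming the default bridge driver:
networks:
  laravel:
    driver: bridge
Every service that phpmyadmin needs to reach (the database in particular) must join the same network.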
Maybe your container cannot start because its volume contains incompatible data. That can happen if you downgrade the version of the mysql or mariadb image.
You can resolve the problem if you remove the volume and import the database again. Maybe you have to create a backup first.
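If the incompatible-data case applies, a sketch of the reset (the exact volume name carries your Compose project name as a prefix, so check it first; this deletes the data, so dump the database beforehand if you still need it):
docker-compose down
docker volume ls                     # find the volume, e.g. <project>_db-data
docker volume rm <project>_db-data   # removes the old, incompatible data files
docker-compose up -d                 # the database is re-initialized from scratch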

Docker container shuts down giving 'data directory has wrong ownership' error when executed in windows 10

I have Docker installed on Windows. I am trying to install this application. It has given me the following docker-compose.yml file:
version: '2'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile-nginx
    ports:
      - "8085:80"
    networks:
      - attendizenet
    volumes:
      - .:/usr/share/nginx/html/attendize
    depends_on:
      - php
  php:
    build:
      context: .
      dockerfile: Dockerfile-php
    depends_on:
      - db
      - maildev
      - redis
    volumes:
      - .:/usr/share/nginx/html/attendize
    networks:
      - attendizenet
  php-worker:
    build:
      context: .
      dockerfile: Dockerfile-php
    depends_on:
      - db
      - maildev
      - redis
    volumes:
      - .:/usr/share/nginx/html/attendize
    command: php artisan queue:work --daemon
    networks:
      - attendizenet
  db:
    image: postgres
    environment:
      - POSTGRES_USER=attendize
      - POSTGRES_PASSWORD=attendize
      - POSTGRES_DB=attendize
    ports:
      - "5433:5432"
    volumes:
      - ./docker/pgdata:/var/lib/postgresql/data
    networks:
      - attendizenet
  maildev:
    image: djfarrelly/maildev
    ports:
      - "1080:80"
    networks:
      - attendizenet
  redis:
    image: redis
    networks:
      - attendizenet
networks:
  attendizenet:
    driver: bridge
All the installation goes well, but the PostgreSQL container stops a moment after starting, giving the following error:
2018-03-07 08:24:47.927 UTC [1] FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
2018-03-07 08:24:47.927 UTC [1] HINT: The server must be started by the user that owns the data directory
A simple PostgreSQL container from Docker Hub works smoothly, but the error occurs when we try to attach a volume to the container.
I am new to Docker, so please excuse any incorrect use of terms.
This is a documented problem with the Postgres Docker image on Windows [1][2][3][4]. Currently, there doesn't appear to be a way to correctly mount Windows directories as volumes. You could instead use a persistent Docker volume, for example:
db:
  image: postgres
  environment:
    - POSTGRES_USER=attendize
    - POSTGRES_PASSWORD=attendize
    - POSTGRES_DB=attendize
  ports:
    - "5433:5432"
  volumes:
    - pgdata:/var/lib/postgresql/data
  networks:
    - attendizenet
volumes:
  pgdata:
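If you need to check where the named volume's files actually end up, the volume can be listed and inspected (the exact name is prefixed with your Compose project name):
docker volume ls
docker volume inspect <project>_pgdata   # the "Mountpoint" field shows where Docker keeps the data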
Other things that didn't work:
Set PGDATA to a subdirectory (See PGDATA Setting)
environment:
  - PGDATA=/var/lib/postgresql/data/mnt
volumes:
  - ./pgdata:/var/lib/postgresql/data
Use a Bind Mount (docker-compose 3.2)
volumes:
  - type: bind
    source: ./pgdata
    target: /var/lib/postgresql/data
Running as POSTGRES_USER=root
More Information:
GitHub
data directory "/var/lib/postgresql/data" has wrong ownership
Docker Forums
postgresql-data-pgdata-has-wrong-ownership
postgres-to-work-on-persistent-windows-mount
Please refer to reinierkors' answer from here. The answer below is copied as-is from that link for the reader's convenience, and it works for me:
I solved this by mapping my local volume one directory below the one Postgres needs:
version: '3'
services:
  postgres:
    image: postgres
    restart: on-failure
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - PGDATA=/var/lib/postgresql/data/pgdata
      - POSTGRES_DB=postgres
    volumes:
      - ./postgres_data:/var/lib/postgresql
    ports:
      - 5432:5432
I was having the same issue after downgrading my Docker from WSL 2 to WSL 1. As per what Thomas Taylor describes, I solved the issue by using a named volume.
version: '3.8'
services:
  postgres:
    image: timescale/timescaledb:latest-pg12
    ...
    volumes:
      - pgdata:/var/lib/postgresql/data
  ...
volumes:
  pgdata:
Map the local volume (e.g. C:\docker\pgdata) to one level (one directory) above what PostgreSQL needs. You can also do it from the command line when starting the container:
docker run -itd -e POSTGRES_USER=pguser -e POSTGRES_PASSWORD=pgpasswd \
-e PGDATA=/var/lib/postgresql/data/pgdata -p 5432:5432 \
-v c:\docker\pgdata:/var/lib/postgresql --name postgresql postgres
I hit this issue when I re-installed Docker and used the WSL 1 backend.
Solution: switch Docker to the WSL 2 backend.
I had the same problem; I had to copy the data dir at regular intervals.
docker cp <container-name>:/var/lib/postgresql/data C:/docker/volumes/postgres
The owner of the data folder inside the Postgres container is the postgres user. Your current user may not have access privileges on the mounted folder. You can grant full permissions with the command below:
chmod 777 ./docker/pgdata
If this command does not help resolve the issue, please refer to the following link to map the user inside the container to a user outside the container:
https://docs.docker.com/engine/security/userns-remap/#prerequisites
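If you would rather not open the directory to everyone with chmod 777, matching the directory owner to the UID that the image runs Postgres under may be enough. In the official Debian-based postgres image that is usually UID/GID 999, but verify it first:
docker run --rm postgres id postgres    # prints the uid/gid used inside the image, e.g. uid=999(postgres)
sudo chown -R 999:999 ./docker/pgdata   # assuming 999:999 from the command above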

How to connect to localhost postgres database from docker container?

I've configured my project for Docker. I have a database that was used in the pre-Docker period, and now I want to connect my docker-compose db service to it. But when I run docker-compose up, the existing database is not used - a new one is created instead (I suspect the Docker container simply doesn't see the database). If I'm doing something nonsensical, please let me know. Maybe I should migrate my server DB into the container.
Here is my docker-compose.yml:
services:
  db:
    restart: always
    image: postgres:latest
    environment:
      - POSTGRES_DB=mydb
      - POSTGRES_PASSWORD=p@ssw0rd
      - POSTGRES_USER=root
    ports:
      - "5432:5432"
    volumes:
      # We'll mount the 'postgres-data' volume into the location Postgres stores its data:
      - postgres-data:/var/lib/postgresql/data
  web:
    build: .
    command: bash -c "python manage.py collectstatic --noinput && ./manage.py migrate && ./run_gunicorn.sh"
    volumes:
      - .:/code
      - /static:/static
    ports:
      - 443:443
    depends_on:
      - db
  nginx:
    restart: always
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./misc/nginx.conf:/etc/nginx/conf.d/default.conf
      - /static:/static
    depends_on:
      - web
I think the canonical approach is to have your DB engine running in a container while storing the data on persistent storage (map the volume to your hard disk). So I would use Postgres in Docker as the server DB, as you suggested.
If you only want your application to connect to the external database, declare it as an external host:
version: '2'
services:
  web:
    build: .
    command: bash -c "python manage.py collectstatic --noinput && ./manage.py migrate && ./run_gunicorn.sh"
    volumes:
      - .:/code
      - /static:/static
    ports:
      - 443:443
    extra_hosts:
      - "db:192.168.1.2"
  nginx:
    restart: always
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./misc/nginx.conf:/etc/nginx/conf.d/default.conf
      - /static:/static
    depends_on:
      - web
Just be sure your application references the database as db, and replace the IP I put there with your host IP.
Regards
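On Docker Engine 20.10 or newer, the special host-gateway value can stand in for the hard-coded IP, so the compose file keeps working when the host address changes; a small variation on the example above:
  web:
    # ...
    extra_hosts:
      - "db:host-gateway"   # resolved by Docker to the host's gateway address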