Unable to start FastAPI server with PostgreSQL using docker-compose

I am creating a FastAPI server with simple CRUD functionality, with PostgreSQL as the database. Everything works well in my local environment. However, when I tried to run it in containers using docker-compose up, it failed. I was getting this error:
rest_api_1 | File "/usr/local/lib/python3.8/site-packages/psycopg2/__init__.py", line 122, in connect
rest_api_1 | conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
rest_api_1 | sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: Connection refused
rest_api_1 | Is the server running on host "db" (172.29.0.2) and accepting
rest_api_1 | TCP/IP connections on port 5432?
rest_api_1 |
rest_api_1 | (Background on this error at: https://sqlalche.me/e/14/e3q8)
networks_lab2_rest_api_1 exited with code 1
The directory structure:
├── Dockerfile
├── README.md
├── __init__.py
├── app
│   ├── __init__.py
│   ├── __pycache__
│   ├── crud.py
│   ├── database.py
│   ├── main.py
│   ├── models.py
│   ├── object_store
│   └── schemas.py
├── docker-compose.yaml
├── requirements.txt
├── tests
│   ├── __init__.py
│   ├── __pycache__
│   ├── assets
│   ├── test_create.py
│   ├── test_delete.py
│   ├── test_file.py
│   ├── test_get.py
│   ├── test_heartbeat.py
│   └── test_put.py
└── venv
    ├── bin
    ├── include
    ├── lib
    └── pyvenv.cfg
My docker-compose.yaml
version: "3"
services:
  db:
    image: postgres:13-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      POSTGRES_USER: ${DATABASE_TYPE}
      POSTGRES_PASSWORD: ${DATABASE_PASSWORD}
      POSTGRES_DB: ${DATABASE_NAME}
    ports:
      - "5432:5432"
  rest_api:
    build: .
    command: uvicorn app.main:app --host 0.0.0.0
    env_file:
      - ./.env
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - db
volumes:
  postgres_data:
My Dockerfile for the FastAPI server (under ./app):
FROM python:3.8-slim-buster
RUN apt-get update \
&& apt-get -y install libpq-dev gcc \
&& pip install psycopg2
WORKDIR /app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
COPY requirements.txt .
RUN pip install -r requirements.txt
# copy project
COPY . .
My database.py
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from dotenv import load_dotenv
import os
# def create_connection_string():
#     load_dotenv()
#     db_type = os.getenv("DATABASE_TYPE")
#     username = os.getenv("DATABASE_USERNAME")
#     password = os.getenv("DATABASE_PASSWORD")
#     host = os.getenv("DATABASE_HOST")
#     port = os.getenv("DATABASE_PORT")
#     name = os.getenv("DATABASE_NAME")
#
#     return "{0}://{1}:{2}@{3}/{4}".format(db_type, username, password, host, name)
SQLALCHEMY_DATABASE_URI = "postgresql://postgres:postgres@db:5432/postgres"
engine = create_engine(
    SQLALCHEMY_DATABASE_URI
)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()
My main.py
from typing import List, Optional
import os, base64, shutil
from functools import wraps
from fastapi import Depends, FastAPI, HTTPException, UploadFile, File, Request, Header
from fastapi.responses import FileResponse
from sqlalchemy.orm import Session
from . import crud, models, schemas
from .database import SessionLocal, engine
models.Base.metadata.create_all(bind=engine)
app = FastAPI()
SECRET_KEY = os.getenv("SECRET")
# Dependency
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

def check_request_header(x_token: str = Header(...)):
    if x_token != SECRET_KEY:
        raise HTTPException(status_code=401, detail="Unauthorized")

# endpoints
@app.get("/heartbeat", dependencies=[Depends(check_request_header)], status_code=200)
def heartbeat():
    return "The connection is up"
A more complete log is:
Creating db_1 ... done
Creating rest_api_1 ... done
Attaching to db_1, rest_api_1
db_1 | The files belonging to this database system will be owned by user "postgres".
db_1 | This user must also own the server process.
db_1 |
db_1 | The database cluster will be initialized with locale "en_US.utf8".
db_1 | The default database encoding has accordingly been set to "UTF8".
db_1 | The default text search configuration will be set to "english".
db_1 |
...
db_1 | selecting dynamic shared memory implementation ... posix
db_1 | selecting default max_connections ... 100
db_1 | selecting default shared_buffers ... 128MB
db_1 | selecting default time zone ... UTC
db_1 | creating configuration files ... ok
db_1 | running bootstrap script ... ok
db_1 | performing post-bootstrap initialization ... sh: locale: not found
db_1 | 2021-09-29 18:13:35.027 UTC [31] WARNING: no usable system locales were found
rest_api_1 | Traceback (most recent call last):
rest_api_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 3240, in _wrap_pool_connect
rest_api_1 | return fn()
...
rest_api_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 584, in connect
rest_api_1 | return self.dbapi.connect(*cargs, **cparams)
rest_api_1 | File "/usr/local/lib/python3.8/site-packages/psycopg2/__init__.py", line 122, in connect
rest_api_1 | conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
rest_api_1 | psycopg2.OperationalError: could not connect to server: Connection refused
rest_api_1 | Is the server running on host "db" (172.29.0.2) and accepting
rest_api_1 | TCP/IP connections on port 5432?
rest_api_1 |
rest_api_1 |
rest_api_1 | The above exception was the direct cause of the following exception:
rest_api_1 |
rest_api_1 | Traceback (most recent call last):
rest_api_1 | File "/usr/local/bin/uvicorn", line 8, in <module>
rest_api_1 | sys.exit(main())
...
rest_api_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 584, in connect
rest_api_1 | return self.dbapi.connect(*cargs, **cparams)
rest_api_1 | File "/usr/local/lib/python3.8/site-packages/psycopg2/__init__.py", line 122, in connect
rest_api_1 | conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
rest_api_1 | sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: Connection refused
rest_api_1 | Is the server running on host "db" (172.29.0.2) and accepting
rest_api_1 | TCP/IP connections on port 5432?
rest_api_1 |
rest_api_1 | (Background on this error at: https://sqlalche.me/e/14/e3q8)
rest_api_1 exited with code 1
db_1 | ok
db_1 | syncing data to disk ... ok
db_1 |
db_1 |
db_1 | Success. You can now start the database server using:
...
db_1 | 2021-09-29 18:13:36.325 UTC [1] LOG: starting PostgreSQL 13.4 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.3.1_git20210424) 10.3.1 20210424, 64-bit
db_1 | 2021-09-29 18:13:36.325 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2021-09-29 18:13:36.325 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2021-09-29 18:13:36.328 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2021-09-29 18:13:36.332 UTC [48] LOG: database system was shut down at 2021-09-29 18:13:36 UTC
db_1 | 2021-09-29 18:13:36.336 UTC [1] LOG: database system is ready to accept connections
I have searched extensively and read docs/tutorials about running a FastAPI server and PostgreSQL with docker-compose, such as:
https://testdriven.io/blog/fastapi-docker-traefik/
https://github.com/AmishaChordia/FastAPI-PostgreSQL-Docker/blob/master/FastAPI/docker-compose.yml
https://www.jeffastor.com/blog/pairing-a-postgresql-db-with-your-dockerized-fastapi-app
Their approach is the same as mine, but it just keeps giving me this "Connection refused ... Is the server running on host "db" (172.29.0.2) and accepting TCP/IP connections on port 5432?" error message.
Can anyone help me out here? Any help will be appreciated!

First, the SQLALCHEMY_DATABASE_URI in database.py should match the user, password, and database name supplied in your docker-compose.yaml. Ensure that you are running docker-compose up with the correct environment. In your case, the environment for docker-compose up should be:
DATABASE_TYPE=postgres
DATABASE_PASSWORD=postgres
DATABASE_NAME=postgres
But I think that your problem is somewhere else. Even if you declare your API service with depends_on: - db, the Postgres server may not be ready yet. depends_on only ensures that the dependent container is not started before the referenced one; it does not guarantee anything more. It takes some time for the Postgres server to be initialized and up and running inside the container, and if your API tries to connect before that happens, it will fail.
The most common and simplest solution is to write a bit of code that repeatedly checks whether the database is up and running before the actual connection happens. As you are not supplying the whole traceback (you have replaced the most important part with ...), I can only guess on which line of your code the connection is triggered. I would recommend modifying your database.py to look like this (not tested, may require some adjustments):
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from dotenv import load_dotenv
import os
import time


def wait_for_db(db_uri):
    """checks if database connection is established"""
    _local_engine = create_engine(db_uri)
    _LocalSessionLocal = sessionmaker(
        autocommit=False, autoflush=False, bind=_local_engine
    )
    up = False
    while not up:
        try:
            # Try to create session to check if DB is awake
            db_session = _LocalSessionLocal()
            # try some basic query
            db_session.execute("SELECT 1")
            db_session.commit()
        except Exception as err:
            print(f"Connection error: {err}")
            up = False
        else:
            up = True
        time.sleep(2)


SQLALCHEMY_DATABASE_URI = "postgresql://postgres:postgres@db:5432/postgres"

wait_for_db(SQLALCHEMY_DATABASE_URI)

engine = create_engine(
    SQLALCHEMY_DATABASE_URI
)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()
A more sophisticated solution would be to play with docker-compose healthchecks (v2 only). For docker-compose v3, the docs recommend doing it manually, similar to the solution presented above.
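For reference, a minimal sketch of what such a healthcheck can look like, assuming a Compose file format that supports depends_on conditions (the v2.x file format, or the newer Compose Specification understood by recent Docker Compose releases); pg_isready ships with the postgres image, and the -U value should match your POSTGRES_USER:

services:
  db:
    image: postgres:13-alpine
    healthcheck:
      # pg_isready exits with 0 once the server accepts connections
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
  rest_api:
    build: .
    depends_on:
      db:
        condition: service_healthy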
To improve this solution, put wait_for_db in a standalone Python command-line script and run it from the image entrypoint as a prestart stage. You will need a prestart stage in the entrypoint anyway for running migrations (you do include migrations in your projects, right?).
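A rough sketch of such a prestart script (hypothetical file name prestart.py; assumes SQLAlchemy 1.4-style usage and the same connection URI as above):

import sys
import time

from sqlalchemy import create_engine, text

SQLALCHEMY_DATABASE_URI = "postgresql://postgres:postgres@db:5432/postgres"
MAX_TRIES = 30


def wait_for_db(db_uri, max_tries=MAX_TRIES):
    """Return True once SELECT 1 succeeds, False after max_tries failed attempts."""
    engine = create_engine(db_uri)
    for attempt in range(1, max_tries + 1):
        try:
            with engine.connect() as conn:
                conn.execute(text("SELECT 1"))
            return True
        except Exception as err:
            print(f"Database not ready (attempt {attempt}/{max_tries}): {err}")
            time.sleep(2)
    return False


if __name__ == "__main__":
    # exit non-zero so the container (and docker-compose) can report the failure
    sys.exit(0 if wait_for_db(SQLALCHEMY_DATABASE_URI) else 1)

It could then be wired into the rest_api service as, for example, command: sh -c "python prestart.py && uvicorn app.main:app --host 0.0.0.0" (again, only a sketch).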

You could also handle retries by using Docker's restart mechanism. If you use max attempts and the right delay, you can make it so the DB will most likely be ready by the second try, while preventing infinite restarts.
rest_api:
  ...
  deploy:
    restart_policy:
      condition: on-failure
      delay: 5s  # default
      max_attempts: 5
  ...
Note that I'm not a Docker expert, but this seems like it aligns better with the "containers as cattle, not pets" paradigm. Why add complexity to the application when the issue can be handled by existing functionality at a higher layer?

Related

Can't connect PostGIS to database and server

I followed these steps to set up QWC services (https://github.com/qwc-services/qwc-services-core#quick-start) and I can run the demo. But if I load my own QGIS project, I receive the following error message:
qwc-qgis-server_1 | 07:50:07 WARNING Server[99]: <ServerException>Layer(s) not valid</ServerException>
qwc-qgis-server_1 |
qwc-qgis-server_1 | 07:50:07 WARNING ClearCapabilities[99]: Cached cleared : /data/MeasurementDemo.qgs
qwc-qgis-server_1 | 07:50:07 WARNING PostGIS[99]: Connection to database failed
qwc-qgis-server_1 | could not connect to server: No such file or directory
qwc-qgis-server_1 | Is the server running locally and accepting
qwc-qgis-server_1 | connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
qwc-qgis-server_1 |
qwc-qgis-server_1 | 07:50:07 CRITICAL Server[99]: Error, Layer(s) measurement_b46e976f_2d0f_4bf0_942a_9d9462b40c3e not valid in project /data/MeasurementDemo.qgs
qwc-qgis-server_1 | 07:50:07 WARNING Server[99]: <ServerException>Layer(s) not valid</ServerException>
qwc-qgis-server_1 |
qwc-config-service_1 | [2022-01-04 07:50:09,360] WARNING in config_generator: Skipping theme item '': Could not get capabilities for /ows/MeasurementDemo
qwc-config-service_1 | [2022-01-04 07:50:19,468] CRITICAL in config_generator: The generation of the configuration files resulted in a failure
qwc-config-service_1 | [2022-01-04 07:50:19,468] CRITICAL in config_generator: The configuration files were not updated!
qwc-config-service_1 | [2022-01-04 07:50:20,856] CRITICAL in config_generator: The generation of the permission files resulted in a failure.
qwc-config-service_1 | [2022-01-04 07:50:20,857] CRITICAL in config_generator: The permission files were not updated!
qwc-config-service_1 | [pid: 15|app: 0|req: 18/18] 172.18.0.11 () {30 vars in 408 bytes} [Tue Jan 4 07:50:05 2022] POST /generate_configs?tenant=default => generated 2881 bytes in 15083 msecs (HTTP/1.1 200) 2 headers in 81 bytes (1 switches on core 0)
As the error is quite similar to this question: PostgreSQL: Why psql can't connect to server?, I followed the answers but with no result.
ps -ef | grep postgres gives me the following result:
postgres 203911 1 0 07:35 ? 00:00:00 /usr/lib/postgresql/13/bin/postgres -D /var/lib/postgresql/13/main -c config_file=/etc/postgresql/13/main/postgresql.conf
Also I found the socket in
/var/run/postgresql/.s.PGSQL.5432
And I ran the command
psql -h /var/run/postgresql/ GeoDB
but without result. After that I checked the pg_hba.conf file:
# "local" is for Unix domain socket connections only
local all all peer
Running the command pg_lsclusters gives me:
Ver Cluster Port Status Owner Data directory Log file
13 main 5432 online postgres /var/lib/postgresql/13/main /var/log/postgresql/postgresql-13-main.log
Also after restarting the pg_ctlcluster and PostgreSQL the error remained the same.
Edit 1
After the answer from cnaimi, I checked the postgresql.conf file:
# - Connection Settings -
#listen_addresses = '*' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
port = 5432 # (change requires restart)
max_connections = 100 # (change requires restart)
#superuser_reserved_connections = 3 # (change requires restart)
unix_socket_directories = '/var/run/postgresql' # comma-separated list of directories
# (change requires restart)
#unix_socket_group = '*' # (change requires restart)
#unix_socket_permissions = 0777 # begin with 0 to use octal notation
# (change requires restart)
#bonjour = off # advertise server via Bonjour
# (change requires restart)
#bonjour_name = '' # defaults to the computer name
# (change requires restart)
But I can't find an error there, as the port is 5432 and it listens on all addresses.
Edit 2
During my search I found several pg_service.conf files:
./qwc-services/qwc-docker/wsgi-service/pg_service.conf
./qwc-services/qwc-docker/qgis-server/pg_service.conf
./qwc-services/qwc-docker/postgis/pg_service.conf
./qwc-services/qwc-docker/pg_service.conf
Each of them contains one or more sets of database credentials like the one below:
[qwc_geodb]
host=qwc-postgis
port=5432
dbname=qwc_demo
user=qwc_service
password=qwc_service
sslmode=disable
The port is correct in all files, as far as I saw. But of course the DB name and user/password are wrong. Could this cause the error? Or does QWC get the credentials through the .qgs file?
Edit 3
Thanks to the hints from Devdatta Tengshe, I set the host for PostgreSQL to 127.0.0.1. Using sudo docker-compose ps, one can see the running containers and their ports:
Name Command State Ports
-------------------------------------------------------------------------------------------------------------------------------
qwc-docker_qwc-admin-gui_1 /bin/sh -c uwsgi --http-so ... Up 127.0.0.1:5031->9090/tcp
qwc-docker_qwc-api-gateway_1 /docker-entrypoint.sh ngin ... Up 0.0.0.0:8088->80/tcp,:::8088->80/tcp
qwc-docker_qwc-auth-service_1 /bin/sh -c uwsgi --http-so ... Up 127.0.0.1:5017->9090/tcp
qwc-docker_qwc-config-service_1 /bin/sh -c uwsgi --http-so ... Up 127.0.0.1:5010->9090/tcp
qwc-docker_qwc-data-service_1 /bin/sh -c uwsgi --http-so ... Up 127.0.0.1:5012->9090/tcp
qwc-docker_qwc-elevation-service_1 /bin/sh -c uwsgi --http-so ... Up 127.0.0.1:5002->9090/tcp
qwc-docker_qwc-fulltext-search-service_1 /bin/sh -c uwsgi --http-so ... Up 127.0.0.1:5011->9090/tcp
qwc-docker_qwc-map-viewer_1 /bin/sh -c uwsgi --http-so ... Up 127.0.0.1:5030->9090/tcp
qwc-docker_qwc-mapinfo-service_1 /bin/sh -c uwsgi --http-so ... Up 127.0.0.1:5016->9090/tcp
qwc-docker_qwc-ogc-service_1 /bin/sh -c uwsgi --http-so ... Up 127.0.0.1:5013->9090/tcp
qwc-docker_qwc-permalink-service_1 /bin/sh -c uwsgi --http-so ... Up 127.0.0.1:5001->9090/tcp
qwc-docker_qwc-postgis_1 docker-entrypoint.sh postgres Up (healthy) 127.0.0.1:5439->5432/tcp
qwc-docker_qwc-qgis-server_1 /sbin/my_init Up 127.0.0.1:8001->80/tcp
qwc-docker_qwc-solr_1 docker-entrypoint.sh solr- ... Up 127.0.0.1:8983->8983/tcp
Can you check the postgresql.conf file located at
/etc/postgresql/13/main/postgresql.conf
especially the parameter listen_addresses?
Maybe you have to specify which hosts the server should listen on.
But if the demo example is working, the database configuration should be OK.
You can also check the port for Postgres in postgresql.conf and validate that it's 5432.
There are a couple of things that need to be fixed to get this working.
I'm assuming that you have the Postgres Server running on the host machine, and not within any Docker container.
When you configured your QGIS Map file, you probably connected to localhost, and this information got saved in the .qgs file.
This is why your first error message says that it is trying to connect to localhost and no server was found. This error was thrown within the qwc Docker container.
This error occurs because QGIS Server (within the Docker container) is not able to connect to the Postgres server running on the host when it uses 'localhost' as the hostname.
To solve this, you need to do the following:
In QGIS, connect to the Postgres Server using 127.0.0.1 and not localhost.
Save your qgs file using this new connection.
When you run the docker container for qwc, use --network="host" as the commandline parameter.
See: From inside of a Docker container, how do I connect to the localhost of the machine?
After this, the qgis server (within docker container) should be able to connect to the Postgres Server running on your host, using 127.0.0.1 as IP address.
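As an illustration only (the exact image and service names depend on your qwc-services setup, so treat them as placeholders), the host-network option can be passed on the command line or set in docker-compose.yml; note that host networking behaves this way on Linux hosts:

# one-off container sharing the host's network stack, so 127.0.0.1 inside
# the container is the host itself (Linux only); <qwc-image> is a placeholder
docker run --rm --network=host <qwc-image>

# or, for a service in docker-compose.yml:
#   qwc-qgis-server:
#     network_mode: "host"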

How to connect the postgres database in docker

I have created a Rasa chatbot that asks for user information and stores it in a Postgres database. Locally it works. I have been trying to do the same in Docker, but it is not working. I'm new to Docker. Could anyone help me? Thanks in advance.
Docker-compose.yml
version: "3.0"
services:
  rasa:
    container_name: rasa
    image: rasa/rasa:2.8.1-full
    ports:
      - 5005:5005
    volumes:
      - ./:/app
    command:
      - run
      - -m
      - models
      - --enable-api
      - --cors
      - "*"
    depends_on:
      - action-server1
      - db
  action-server1:
    container_name: "action-server1"
    build:
      context: actions
    volumes:
      - ./actions:/app/actions
    ports:
      - "5055:5055"
    networks:
      - shan_network
  db:
    image: "postgres"
    environment:
      POSTGRESQL_USERNAME: "postgres"
      POSTGRESQL_PASSWORD: ""
      POSTGRESQL_DATABASE: "postgres"
      POSTGRES_HOST_AUTH_METHOD: "trust"
    volumes:
      - db-data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
volumes:
  db-data:
Logs: all services are running according to the logs, and I also checked in Docker that Postgres is running.
db_1 |
db_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
db_1 |
db_1 | 2021-08-05 08:21:45.685 UTC [1] LOG: starting PostgreSQL 13.3 (Debian 13.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debia
n 8.3.0-6) 8.3.0, 64-bit
db_1 | 2021-08-05 08:21:45.686 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2021-08-05 08:21:45.686 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2021-08-05 08:21:45.699 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2021-08-05 08:21:45.712 UTC [26] LOG: database system was shut down at 2021-08-05 08:21:25 UTC
db_1 | 2021-08-05 08:21:45.722 UTC [1] LOG: database system is ready to accept connections
Error:
action-server1 | warnings.warn(
action-server1 | Exception occurred while handling uri: 'http://action-server1:5055/webhook'
action-server1 | Traceback (most recent call last):
action-server1 | File "/opt/venv/lib/python3.8/site-packages/sanic/app.py", line 973, in handle_request
action-server1 | response = await response
action-server1 | File "/opt/venv/lib/python3.8/site-packages/rasa_sdk/endpoint.py", line 104, in webhook
action-server1 | result = await executor.run(action_call)
action-server1 | File "/opt/venv/lib/python3.8/site-packages/rasa_sdk/executor.py", line 398, in run
action-server1 | action(dispatcher, tracker, domain)
action-server1 | File "/app/actions/actions.py", line 148, in run
action-server1 | connection = psycopg2.connect(database="postgres", user='postgres', password='password',port='5432'
action-server1 | File "/opt/venv/lib/python3.8/site-packages/psycopg2/__init__.py", line 122, in connect
action-server1 | conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
action-server1 | psycopg2.OperationalError: could not connect to server: No such file or directory
action-server1 | Is the server running locally and accepting
action-server1 | connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
rasa | 2021-08-05 08:25:13 ERROR rasa.core.processor - Encountered an exception while running action 'action_save'.Bot will continue, but
the actions events are lost. Please check the logs of your action server for more information.
rasa | Traceback (most recent call last):
rasa | File "/opt/venv/lib/python3.8/site-packages/rasa/core/actions/action.py", line 685, in run
rasa | response = await self.action_endpoint.request(
rasa | File "/opt/venv/lib/python3.8/site-packages/rasa/utils/endpoints.py", line 172, in request
rasa | raise ClientResponseError(
rasa | rasa.utils.endpoints.ClientResponseError: 500, Internal Server Error, body='b'<!DOCTYPE html><meta charset=UTF-8><title>500 \xe2\x80\x9
4 Internal Server Error</title><style>html { font-family: sans-serif }</style>\n<h1>\xe2\x9a\xa0\xef\xb8\x8f 500 \xe2\x80\x94 Internal Server Error</h1><p>
The server encountered an internal error and cannot complete your request.\n''
rasa |
rasa | The above exception was the direct cause of the following exception:
rasa |
rasa | Traceback (most recent call last):
rasa | File "/opt/venv/lib/python3.8/site-packages/rasa/core/processor.py", line 772, in _run_action
rasa | events = await action.run(
rasa | File "/opt/venv/lib/python3.8/site-packages/rasa/core/actions/action.py", line 709, in run
rasa | raise RasaException("Failed to execute custom action.") from e
rasa | rasa.shared.exceptions.RasaException: Failed to execute custom action.
Think of the containers in the stack as different physical or virtual machines. Your database is on one host and the chatbot is on another. Naturally the chatbot cannot find /var/run/postgresql/.s.PGSQL.5432 locally because it is in another container (as if on another computer), so you need to use a network connection to reach it:
# If host is not given, the Unix socket is used, which you appear to have locally,
# thus add it here:
connection = psycopg2.connect(database="postgres",
                              user='postgres',
                              password='password',
                              host='db',  # name of the service in the stack
                              port='5432')
Also, your action-server1 service is configured to be in shan_network:
action-server1:
  networks:
    - shan_network
Therefore, action-server1 currently has no network access to the other services in this stack. db and rasa have no networks configured, and because of that they use the default network, which docker-compose creates for you automatically. This is as if you had configured those services as follows:
db:
  image: "postgres"
  networks:
    - default
If you wish action-server1 to appear in several networks and thus be able to reach services both in this stack and whatever is in shan_network, you need to add the service to the default network:
action-server1:
  networks:
    - shan_network
    - default
Alternatively, if you are unsure why there is a shan_network at all, you can simply remove the networks key from the action-server1 service.
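Putting it together, a sketch of the relevant parts (assuming shan_network is declared at the top level of this file, or marked external if it is created elsewhere):

services:
  action-server1:
    networks:
      - shan_network
      - default   # so rasa and db can reach it by service name
  db:
    image: "postgres"
    # no networks key -> joins the default network automatically
  rasa:
    image: rasa/rasa:2.8.1-full
    # no networks key -> joins the default network automatically

networks:
  shan_network:
    external: true   # assumption: created outside this compose file; drop this if it is defined here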

docker-compose postgres restart after running scripts in docker-entrypoint-initdb.d

I have a simple Docker Compose file to initialize a Postgres DB Instance:
version: '3.8'
services:
  my-database:
    container_name: my-database
    image: library/postgres:13.1
    volumes:
      - ./db/init-my-database.sql:/docker-entrypoint-initdb.d/init-db-01.sql
    environment:
      - POSTGRES_DB=my-database
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=password
    ports:
      - 10040:5432
    restart: always
    networks:
      - app-network
And my script init-my-database.sql looks like this:
DROP DATABASE IF EXISTS myschema;
CREATE DATABASE myschema;

-- Make sure we're using our `myschema` database
\c myschema;

CREATE SEQUENCE IF NOT EXISTS recipient_seq
    START WITH 1
    MINVALUE 1
    MAXVALUE 9223372036854775807
    CACHE 1;

CREATE TABLE IF NOT EXISTS recipient
(
    id           BIGINT       NOT NULL,
    recipient_id VARCHAR(255) NOT NULL,
    first_name   VARCHAR(255) NOT NULL,
    middle_name  VARCHAR(255) NOT NULL,
    last_name    VARCHAR(255) NOT NULL,
    CONSTRAINT recipient_pk PRIMARY KEY (id)
);
When I tail the Docker logs I do see that the initialization script is being called. However, towards the end the database is shut down and restarted ("received fast shutdown request").
Why is the database restarted at the end? The database and table created in the script are not available anymore.
Complete Docker logs:
my-database | The files belonging to this database system will be owned by user "postgres".
my-database | This user must also own the server process.
my-database |
my-database | The database cluster will be initialized with locale "en_US.utf8".
my-database | The default database encoding has accordingly been set to "UTF8".
my-database | The default text search configuration will be set to "english".
my-database |
my-database | Data page checksums are disabled.
my-database |
my-database | fixing permissions on existing directory /var/lib/postgresql/data ... ok
my-database | creating subdirectories ... ok
my-database | selecting dynamic shared memory implementation ... posix
my-database | selecting default max_connections ... 100
my-database | selecting default shared_buffers ... 128MB
my-database | selecting default time zone ... Etc/UTC
my-database | creating configuration files ... ok
my-database | running bootstrap script ... ok
my-database | performing post-bootstrap initialization ... ok
my-database | syncing data to disk ... ok
my-database |
my-database | initdb: warning: enabling "trust" authentication for local connections
my-database | You can change this by editing pg_hba.conf or using the option -A, or
my-database | --auth-local and --auth-host, the next time you run initdb.
my-database |
my-database | Success. You can now start the database server using:
my-database |
my-database | pg_ctl -D /var/lib/postgresql/data -l logfile start
my-database |
my-database | waiting for server to start....2020-12-22 23:42:50.083 UTC [45] LOG: starting PostgreSQL 13.1 (Debian 13.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
my-database | 2020-12-22 23:42:50.085 UTC [45] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
my-database | 2020-12-22 23:42:50.091 UTC [46] LOG: database system was shut down at 2020-12-22 23:42:49 UTC
my-database | 2020-12-22 23:42:50.096 UTC [45] LOG: database system is ready to accept connections
my-database | done
my-database | server started
my-database | CREATE DATABASE
my-database |
my-database |
my-database | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/init-db-01.sql
my-database | psql:/docker-entrypoint-initdb.d/init-db-01.sql:1: NOTICE: database "mergeletter" does not exist, skipping
my-database | DROP DATABASE
my-database | CREATE DATABASE
my-database | You are now connected to database "mergeletter" as user "admin".
my-database | CREATE SEQUENCE
my-database | CREATE TABLE
my-database |
my-database |
my-database | 2020-12-22 23:42:50.515 UTC [45] LOG: received fast shutdown request
my-database | waiting for server to shut down....2020-12-22 23:42:50.516 UTC [45] LOG: aborting any active transactions
my-database | 2020-12-22 23:42:50.521 UTC [45] LOG: background worker "logical replication launcher" (PID 52) exited with exit code 1
my-database | 2020-12-22 23:42:50.521 UTC [47] LOG: shutting down
my-database | 2020-12-22 23:42:50.541 UTC [45] LOG: database system is shut down
my-database | done
my-database | server stopped
my-database |
my-database | PostgreSQL init process complete; ready for start up.
my-database |
my-database | 2020-12-22 23:42:50.648 UTC [1] LOG: starting PostgreSQL 13.1 (Debian 13.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
my-database | 2020-12-22 23:42:50.649 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
my-database | 2020-12-22 23:42:50.649 UTC [1] LOG: listening on IPv6 address "::", port 5432
my-database | 2020-12-22 23:42:50.652 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
my-database | 2020-12-22 23:42:50.657 UTC [73] LOG: database system was shut down at 2020-12-22 23:42:50 UTC
my-database | 2020-12-22 23:42:50.663 UTC [1] LOG: database system is ready to accept connections
Put simply, this happens on purpose; the author does it as part of the initialization.
It looks like some answers can be found in the image's entrypoint shell script:
_main() {
    # if first arg looks like a flag, assume we want to run postgres server
    if [ "${1:0:1}" = '-' ]; then
        set -- postgres "$@"
    fi

    if [ "$1" = 'postgres' ] && ! _pg_want_help "$@"; then
        docker_setup_env
        # setup data directories and permissions (when run as root)
        docker_create_db_directories
        if [ "$(id -u)" = '0' ]; then
            # then restart script as postgres user
            exec gosu postgres "$BASH_SOURCE" "$@"
        fi

        # only run initialization on an empty data directory
        if [ -z "$DATABASE_ALREADY_EXISTS" ]; then
            docker_verify_minimum_env

            # check dir permissions to reduce likelihood of half-initialized database
            ls /docker-entrypoint-initdb.d/ > /dev/null

            docker_init_database_dir
            pg_setup_hba_conf

            # PGPASSWORD is required for psql when authentication is required for 'local' connections via pg_hba.conf and is otherwise harmless
            # e.g. when '--auth=md5' or '--auth-local=md5' is used in POSTGRES_INITDB_ARGS
            export PGPASSWORD="${PGPASSWORD:-$POSTGRES_PASSWORD}"
            docker_temp_server_start "$@"

            docker_setup_db
            docker_process_init_files /docker-entrypoint-initdb.d/*

            docker_temp_server_stop
            unset PGPASSWORD

            echo
            echo 'PostgreSQL init process complete; ready for start up.'
            echo
        else
            echo
            echo 'PostgreSQL Database directory appears to contain a database; Skipping initialization'
            echo
        fi
    fi

    exec "$@"
}
As for "why?", I think it's because of the desire to run as a less-privileged user.
You can "solve" the problem by specifying a volume in the Compose file like so:
volumes:
  - ./data/pgsql:/var/lib/postgresql/data
Then, once the data directory has been populated, DATABASE_ALREADY_EXISTS is detected on subsequent starts and the init routine is skipped.
Or, if that's not helpful, you can dig into the entrypoint script a bit more.
Thanks to the above tip from @TorEHagermann (https://stackoverflow.com/a/65417566/5631863), I was able to resolve the problem.
The solution was to include the following:
volumes:
  - ./data/pgsql:/var/lib/postgresql/data
Also, in my case, I needed to clear and refresh the data every time I started the container. So I included the following in my shell script to delete the directory ./data/pgsql:
rm -rf data
With that change, my docker-compose file looks like:
version: '3.8'
services:
  my-database:
    container_name: my-database
    image: library/postgres:13.1
    volumes:
      - ./db/init-my-database.sql:/docker-entrypoint-initdb.d/init-db-01.sql
      - ./data/pgsql:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=my-database
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=password
    ports:
      - 10040:5432
    restart: always
    networks:
      - app-network
As mentioned above, the restart is intentional. The reason is that changes to some of the database server settings need a restart to take effect. The entrypoint init script is a convenient place for the queries that change these settings.
e.g.
ALTER SYSTEM SET max_connections = 300;

Docker | Postgres Database is uninitialized and superuser password is not specified

I am using docker-compose.yml to create multiple running containers, but the Postgres Docker server fails to start, with the following logs. Yes, I have searched many related SO posts, but they didn't help me out.
Creating network "complex_default" with the default driver
Creating complex_server_1 ... done
Creating complex_redis_1 ... done
Creating complex_postgres_1 ... done
Attaching to complex_postgres_1, complex_redis_1, complex_server_1
postgres_1 | Error: Database is uninitialized and superuser password is not specified.
postgres_1 | You must specify POSTGRES_PASSWORD to a non-empty value for the
postgres_1 | superuser. For example, "-e POSTGRES_PASSWORD=password" on "docker run".
postgres_1 |
postgres_1 | You may also use "POSTGRES_HOST_AUTH_METHOD=trust" to allow all
postgres_1 | connections without a password. This is *not* recommended.
postgres_1 |
postgres_1 | See PostgreSQL documentation about "trust":
postgres_1 | https://www.postgresql.org/docs/current/auth-trust.html
complex_postgres_1 exited with code 1
below is my docker-compose configuration:
version: '3'
services:
  postgres:
    image: 'postgres:11-alpine'
  redis:
    image: 'redis:latest'
  server:
    build:
      dockerfile: Dockerfile.dev
      context: ./server
    volumes:
      - /app/node_modules
      - ./server:/app
    environment:
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - PGUSER=postgres
      - PGHOST=postgres
      - PGDATABASE=postgres
      - PGPASSWORD=postgres_password
      - PGPORT=5432
The package.json inside the server directory is the following:
{
  "dependencies": {
    "body-parser": "^1.19.0",
    "cors": "^2.8.4",
    "express": "^4.16.3",
    "nodemon": "^2.0.4",
    "pg": "7.4.3",
    "redis": "^2.8.0"
  },
  "scripts": {
    "dev": "nodemon",
    "start": "node index.js"
  }
}
Also, for better context, I have attached my project structure.
A year ago it was actually working fine. Does anyone have any idea what's going wrong inside my docker-compose file now?
A year ago it was actually working fine. Does anyone have any idea what's going wrong inside my docker-compose file now?
It seems like you pulled a fresh image, and in the new image you must specify the Postgres superuser password. You can check Docker Hub; the postgres:11-alpine image was updated one month ago.
db:
  image: postgres:11-alpine
  restart: always
  environment:
    POSTGRES_PASSWORD: example
The error message is self-explanatory:
You must specify POSTGRES_PASSWORD to a non-empty value for the
superuser. For example, "-e POSTGRES_PASSWORD=password" on "docker run".
Or use POSTGRES_HOST_AUTH_METHOD=trust, which is not recommended:
You may also use "POSTGRES_HOST_AUTH_METHOD=trust" to allow all
connections without a password. This is *not* recommended.
POSTGRES_PASSWORD
This environment variable is required for you to use the PostgreSQL image. It must not be empty or undefined. This environment variable sets the superuser password for PostgreSQL. The default superuser is defined by the POSTGRES_USER environment variable.
Environment Variables
@Adiii yes, you are nearly right: I have to explicitly mention the environment for the postgres image as well, but without the db parent key.
So here I am explicitly showing the docker-compose.yaml config to help others understand better. Also, I am now using the recent stable Postgres image version 12-alpine; currently the latest is postgres:12.3.
version: '3'
services:
  postgres:
    image: 'postgres:12-alpine'
    environment:
      POSTGRES_PASSWORD: postgres_password
  redis:
    image: 'redis:latest'
  server:
    build:
      dockerfile: Dockerfile.dev
      context: ./server
    volumes:
      - /app/node_modules
      - ./server:/app
    environment:
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - PGUSER=postgres
      - PGHOST=postgres
      - PGDATABASE=postgres
      - PGPASSWORD=postgres_password
      - PGPORT=5432
And after docker-compose up, the creation and startup logs were as follows:
PS E:\docker\complex> docker-compose up
Creating network "complex_default" with the default driver
Creating complex_postgres_1 ... done
Creating complex_redis_1 ... done
Creating complex_server_1 ... done
Attaching to complex_redis_1, complex_postgres_1, complex_server_1
postgres_1 | The files belonging to this database system will be owned by user "postgres".
postgres_1 | This user must also own the server process.
postgres_1 |
postgres_1 | The database cluster will be initialized with locale "en_US.utf8".
postgres_1 | The default database encoding has accordingly been set to "UTF8".
postgres_1 | The default text search configuration will be set to "english".
postgres_1 |
postgres_1 | Data page checksums are disabled.
postgres_1 |
postgres_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
postgres_1 | creating subdirectories ... ok
postgres_1 | selecting dynamic shared memory implementation ... posix
postgres_1 | selecting default max_connections ... 100
postgres_1 | selecting default shared_buffers ... 128MB
postgres_1 | selecting default time zone ... UTC
redis_1 | 1:C 05 Aug 2020 14:24:48.692 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1 | 1:C 05 Aug 2020 14:24:48.692 # Redis version=6.0.6, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1 | 1:C 05 Aug 2020 14:24:48.692 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
postgres_1 | creating configuration files ... ok
redis_1 | 1:M 05 Aug 2020 14:24:48.693 * Running mode=standalone, port=6379.
redis_1 | 1:M 05 Aug 2020 14:24:48.693 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1 | 1:M 05 Aug 2020 14:24:48.694 # Server initialized
redis_1 | 1:M 05 Aug 2020 14:24:48.694 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1 | 1:M 05 Aug 2020 14:24:48.694 * Ready to accept connections
postgres_1 | running bootstrap script ... ok
server_1 |
server_1 | > # dev /app
server_1 | > nodemon
server_1 |
postgres_1 | performing post-bootstrap initialization ... sh: locale: not found
postgres_1 | 2020-08-05 14:24:50.153 UTC [29] WARNING: no usable system locales were found
server_1 | [nodemon] 2.0.4
server_1 | [nodemon] to restart at any time, enter `rs`
server_1 | [nodemon] watching path(s): *.*
server_1 | [nodemon] watching extensions: js,mjs,json
server_1 | [nodemon] starting `node index.js`
postgres_1 | ok
server_1 | Listening
postgres_1 | syncing data to disk ... ok
postgres_1 |
postgres_1 |
postgres_1 | Success. You can now start the database server using:
postgres_1 |
postgres_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
postgres_1 |
postgres_1 | initdb: warning: enabling "trust" authentication for local connections
postgres_1 | You can change this by editing pg_hba.conf or using the option -A, or
postgres_1 | --auth-local and --auth-host, the next time you run initdb.
postgres_1 | waiting for server to start....2020-08-05 14:24:51.634 UTC [34] LOG: starting PostgreSQL 12.3 on x86_64-pc-linux-musl, compiled by gcc (Alpine 9.3.0) 9.3.0, 64-bit
postgres_1 | 2020-08-05 14:24:51.700 UTC [34] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2020-08-05 14:24:51.981 UTC [35] LOG: database system was shut down at 2020-08-05 14:24:50 UTC
postgres_1 | 2020-08-05 14:24:52.040 UTC [34] LOG: database system is ready to accept connections
postgres_1 | done
postgres_1 | server started
postgres_1 |
postgres_1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
postgres_1 |
postgres_1 | waiting for server to shut down....2020-08-05 14:24:52.121 UTC [34] LOG: received fast shutdown request
postgres_1 | 2020-08-05 14:24:52.186 UTC [34] LOG: aborting any active transactions
postgres_1 | 2020-08-05 14:24:52.188 UTC [34] LOG: background worker "logical replication launcher" (PID 41) exited with exit code 1
postgres_1 | 2020-08-05 14:24:52.188 UTC [36] LOG: shutting down
postgres_1 | 2020-08-05 14:24:52.669 UTC [34] LOG: database system is shut down
postgres_1 | done
postgres_1 | server stopped
postgres_1 |
postgres_1 | PostgreSQL init process complete; ready for start up.
postgres_1 |
postgres_1 | 2020-08-05 14:24:52.832 UTC [1] LOG: starting PostgreSQL 12.3 on x86_64-pc-linux-musl, compiled by gcc (Alpine 9.3.0) 9.3.0, 64-bit
postgres_1 | 2020-08-05 14:24:52.832 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_1 | 2020-08-05 14:24:52.832 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_1 | 2020-08-05 14:24:52.954 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2020-08-05 14:24:53.136 UTC [43] LOG: database system was shut down at 2020-08-05 14:24:52 UTC
postgres_1 | 2020-08-05 14:24:53.194 UTC [1] LOG: database system is ready to accept connections
Hope this helps someone.
Adding on to ArifMustafa's answer, this worked for me:
postgres:
  image: 'postgres:12-alpine'
  environment:
    POSTGRES_PASSWORD: mypassword
  expose:
    - 5432
  volumes:
    - postgres_data:/var/lib/postgres/data/
volumes:
  postgres_data:
In my case there was an error concerning the POSTGRES_PASSWORD (in docker-compose.yml) and the DATABASES PASSWORD setting in the project-level settings.py file.
After providing a password in docker-compose.yml
db:  # For the PostgreSQL database
  image: postgres:11
  environment:
    - POSTGRES_PASSWORD=example
It is necessary to provide the same password to enable access to the Postgres database between the two containers (in my case, the web application and the database it depends on). In the project-level settings.py:
# Database
# https://docs.djangoproject.com/en/2.2/ref/settings/#databases

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',
        'USER': 'postgres',
        'PASSWORD': 'example',
        'HOST': 'db',
        'PORT': 5432,
    }
}
To resolve the error when using the command,
docker pull postgres
You must specify POSTGRES_PASSWORD to a non-empty value for the
superuser. For example, "-e POSTGRES_PASSWORD=password" on "docker run".
Or use POSTGRES_HOST_AUTH_METHOD=trust, which is not recommended:
You may also use "POSTGRES_HOST_AUTH_METHOD=trust" to allow all connections without a password. This is not recommended.
Here is the solution:
$ docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
The default postgres user and database are created in the entrypoint with initdb.
The postgres database is a default database meant for use by users, utilities and third party applications.
postgresql.org/docs

Docker-Compose + Postgres: /docker-entrypoint-initdb.d/init.sql: Permission denied

I have the following docker compose file:
version: "3"
services:
  postgres:
    image: postgres:11.2-alpine
    environment:
      POSTGRES_PASSWORD: root
      POSTGRES_USER: root
    ports:
      - "5432:5432"
    volumes:
      - ./init-db/init-db.sql:/docker-entrypoint-initdb.d/init.sql
This is the init-db.sql:
CREATE TABLE users (
    email    VARCHAR(355) UNIQUE NOT NULL,
    password VARCHAR(256) NOT NULL
);

CREATE TABLE products (
    id       SERIAL PRIMARY KEY,
    title    VARCHAR(100) NOT NULL,
    price    NUMERIC(6, 2) NOT NULL,
    category INT NOT NULL
);

INSERT INTO users VALUES ('test@test.com', 'Test*123');
INSERT INTO products (title, price, category) VALUES ('Truco', 9.90, 13);
When I run docker-compose up, I'm getting this error:
server started
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/init.sql
/docker-entrypoint-initdb.d/init.sql: Permission denied
I already tried to:
chmod 777 on the sql file
chmod -x on the sql file
Run docker and docker-compose using sudo
Any idea?
I found an easy way to solve this:
use the "build" approach to create the postgres service,
and do NOT set a volume for init.sql, as that causes the permission problem.
postgres:
  build: ./postgres
Create a Dockerfile for postgres like this
FROM postgres:12
COPY ./init.sql /docker-entrypoint-initdb.d/init.sql
CMD ["docker-entrypoint.sh", "postgres"]
Then it should work out.
Hope my answer helps you!
For me the problem was on my machine: SELinux access control was enabled, which did not allow containers to access the files.
Solution: disable SELinux:
echo SELINUX=disabled > /etc/selinux/config
echo SELINUXTYPE=targeted >> /etc/selinux/config
setenforce 0
From this
I had a similar problem when using the ADD command.
When using ADD (e.g. to download a file), the default chmod value is 711. When using the COPY command, the mode will match the host's mode of the file you copy from. The solution is to set the permissions before copying, or to change them in your Dockerfile after they've been copied.
It looks like there will finally be a "COPY --chmod=775" flag available in the upcoming Docker 20, which will make this easier.
https://github.com/moby/moby/issues/34819
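For illustration, a sketch of what that can look like with BuildKit enabled (Docker 20.10+ or the dockerfile:1.3 syntax); the file names follow the Dockerfile approach from the earlier answer:

# syntax=docker/dockerfile:1.3
FROM postgres:12
# set the mode at copy time instead of relying on the host file's permissions
COPY --chmod=775 ./init.sql /docker-entrypoint-initdb.d/init.sql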
When the SQL file in /docker-entrypoint-initdb.d/ has permissions of 775, the file is run correctly.
You can test this within the image (override the entrypoint to /bin/bash) using:
docker-entrypoint.sh postgres
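For example, a sketch against the compose file above, where the service is called postgres (use /bin/sh if bash is not present in the image):

# on the host: start the service with the entrypoint overridden
docker-compose run --rm --entrypoint /bin/bash postgres

# inside the container: check the mounted script's permissions, then run the init by hand
ls -l /docker-entrypoint-initdb.d/
docker-entrypoint.sh postgres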
I had the same problem on macOS, but it's OK on my Ubuntu laptop.
I found that the problem is that macOS does not let Docker access the folder.
The following link may resolve your problem:
https://stackoverflow.com/a/58482702/10752354
Besides, in my case I had to add one line of code to my init.sql file, because the default database is "root", and I needed to switch from the "root" database to the "postgres" database (or, in your case, to another custom database instead of root).
> docker-compose exec db psql -U root
psql (13.0 (Debian 13.0-1.pgdg100+1))
Type "help" for help.

root=# \l
                              List of databases
   Name    | Owner | Encoding |  Collate   |   Ctype    | Access privileges
-----------+-------+----------+------------+------------+-------------------
 postgres  | root  | UTF8     | en_US.utf8 | en_US.utf8 |
 root      | root  | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | root  | UTF8     | en_US.utf8 | en_US.utf8 | =c/root          +
           |       |          |            |            | root=CTc/root
 template1 | root  | UTF8     | en_US.utf8 | en_US.utf8 | =c/root          +
           |       |          |            |            | root=CTc/root
(4 rows)
So I needed to add \c postgres to my init.sql file:
\c postgres  -- for creating the table in the database named 'postgres'
create table sample_table ( ... );
I tried with the following compose file and it seems to work as expected. Are you sure about the paths of the files you use?
version: "3"
services:
  postgres:
    image: postgres:11.2-alpine
    environment:
      POSTGRES_PASSWORD: root
      POSTGRES_USER: root
    ports:
      - "5432:5432"
    volumes:
      - ./init-db.sql:/docker-entrypoint-initdb.d/init.sql
File structure:
drwxr-xr-x 4 shihadeh 502596769 128B Feb 28 22:37 .
drwxr-xr-x 12 shihadeh 502596769 384B Feb 28 22:36 ..
-rw-r--r-- 1 shihadeh 502596769 244B Feb 28 22:37 docker-compose.yml
-rw-r--r-- 1 shihadeh 502596769 380B Feb 28 22:37 init-db.sql
output of docker-compose up
postgres_1 | The files belonging to this database system will be owned by user "postgres".
postgres_1 | This user must also own the server process.
postgres_1 |
postgres_1 | The database cluster will be initialized with locale "en_US.utf8".
postgres_1 | The default database encoding has accordingly been set to "UTF8".
postgres_1 | The default text search configuration will be set to "english".
postgres_1 |
postgres_1 | Data page checksums are disabled.
postgres_1 |
postgres_1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
postgres_1 | creating subdirectories ... ok
postgres_1 | selecting default max_connections ... 100
postgres_1 | selecting default shared_buffers ... 128MB
postgres_1 | selecting dynamic shared memory implementation ... posix
postgres_1 | creating configuration files ... ok
postgres_1 | running bootstrap script ... ok
postgres_1 | performing post-bootstrap initialization ... sh: locale: not found
postgres_1 | 2020-02-28 21:45:01.363 UTC [26] WARNING: no usable system locales were found
postgres_1 | ok
postgres_1 | syncing data to disk ... ok
postgres_1 |
postgres_1 | Success. You can now start the database server using:
postgres_1 |
postgres_1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
postgres_1 |
postgres_1 |
postgres_1 | WARNING: enabling "trust" authentication for local connections
postgres_1 | You can change this by editing pg_hba.conf or using the option -A, or
postgres_1 | --auth-local and --auth-host, the next time you run initdb.
postgres_1 | waiting for server to start....2020-02-28 21:45:02.272 UTC [30] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2020-02-28 21:45:02.294 UTC [31] LOG: database system was shut down at 2020-02-28 21:45:01 UTC
postgres_1 | 2020-02-28 21:45:02.299 UTC [30] LOG: database system is ready to accept connections
postgres_1 | done
postgres_1 | server started
postgres_1 | CREATE DATABASE
postgres_1 |
postgres_1 |
postgres_1 | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/init.sql
postgres_1 | CREATE TABLE
postgres_1 | CREATE TABLE
postgres_1 | INSERT 0 1
postgres_1 | INSERT 0 1
postgres_1 |
postgres_1 |
postgres_1 | waiting for server to shut down....2020-02-28 21:45:02.776 UTC [30] LOG: received fast shutdown request
postgres_1 | 2020-02-28 21:45:02.779 UTC [30] LOG: aborting any active transactions
postgres_1 | 2020-02-28 21:45:02.781 UTC [30] LOG: background worker "logical replication launcher" (PID 37) exited with exit code 1
postgres_1 | 2020-02-28 21:45:02.781 UTC [32] LOG: shutting down
postgres_1 | 2020-02-28 21:45:02.826 UTC [30] LOG: database system is shut down
postgres_1 | done
postgres_1 | server stopped
postgres_1 |
postgres_1 | PostgreSQL init process complete; ready for start up.
postgres_1 |
postgres_1 | 2020-02-28 21:45:02.890 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_1 | 2020-02-28 21:45:02.890 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_1 | 2020-02-28 21:45:02.895 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2020-02-28 21:45:02.915 UTC [43] LOG: database system was shut down at 2020-02-28 21:45:02 UTC
postgres_1 | 2020-02-28 21:45:02.921 UTC [1] LOG: database system is ready to accept connections
Just adding my solution, even though it is a little late.
My problem:
I am installing the source files from a Udemy course:
https://github.com/rockthejvm/spark-essentials
We have 2 important things here:
Docker compose file: docker-compose.yml
Folder and SQL file to do DB initialization: ./sql/bd.sql
I downloaded the zip file to my Ubuntu 20.04 machine.
When doing docker-compose up, I got this error:
Attaching to postgres
postgres | ls: cannot open directory '/docker-entrypoint-initdb.d/': Permission denied
postgres exited with code 2
SOLUTION
Before running docker-compose up, run chmod 777 on the folder called sql that contains the file db.sql:
chmod 777 sql
Explanation:
The folder called sql contains the file db.sql used to initialize the database from inside the Docker container, so we need to grant permissions so the file inside the folder called sql can be read and run.
I think you only need to do the following.
Give permissions to your folder:
chmod 777 init-db
Make sure your SQL file can be read by all users:
chmod 666 init-db.sql