How can I update the max_connections config in my CircleCI configuration?

I'm struggling to configure the max_connections Postgres setting in my CircleCI configuration file. As you can see below, I tried using sed to replace the max_connections value, but this didn't do anything; max_connections remained at the default of 100. I also tried to run a custom command (see the commented command: | block below), but this threw the following error and stopped the CircleCI build:
/docker-entrypoint.sh: line 100: exec: docker: not found
Exited with code 127
version: 2
jobs:
  test:
    pre:
      - sudo sed -i 's/max_connections = 100/max_connections = 300/g' /etc/postgresql/9.6/main/postgresql.conf # Allow more than 100 connections to DB
      - sudo service postgresql restart
    docker:
      # Specify the version you desire here
      - image: circleci/node:8.11
      # Setup postgres and configure the db
      - image: hegand/postgres-postgis
        # command: |
        #   docker run --name hegand/postgres-postgis -e POSTGRES_PORT=$POSTGRES_PORT POSTGRES_PASSWORD=$POSTGRES_PASSWORD POSTGRES_DB=$POSTGRES_DB -d postgres -N 300
        environment:
          POSTGRES_USER: user
          POSTGRES_DB: table
          POSTGRES_PASSWORD: ""
          POSTGRES_PORT: 5432

Specifying command within the - image entry is the right way to go.
Note, however, that command is not the command used to start the Docker container; it is the command run inside the container, i.e. it replaces the value of CMD in the Dockerfile.
I think that the following should work:
- image: hegand/postgres-postgis
  command: postgres -c max_connections=300
Have a look at the CircleCI config reference: https://circleci.com/docs/2.0/configuration-reference/#docker
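For context, here is a minimal sketch of how that command line might sit in the full config from the question (the image name and environment values are carried over from the question, not verified):
version: 2
jobs:
  test:
    docker:
      - image: circleci/node:8.11
      - image: hegand/postgres-postgis
        command: postgres -c max_connections=300
        environment:
          POSTGRES_USER: user
          POSTGRES_DB: table
          POSTGRES_PASSWORD: ""
    steps:
      - checkout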

Following Adrian Mouat's answer, you can pass all config changes via the command option in the docker-compose file; it is clean and simple to generate for different environments.
services:
  postgres:
    ...
    image: postgres:11.5
    command:
      - "postgres"
      - "-c"
      - "max_connections=1000"
      - "-c"
      - "shared_buffers=3GB"
      - "-c"
      ...
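Once the container is up, one way to verify the override took effect (assuming the default postgres superuser and the published 5432 port) is to query the running server:
$ psql -h localhost -U postgres -c 'SHOW max_connections;'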

Related

problem with postgres docker container inside Gitlab CI

I've been blocked on this problem with my project for a few days; it works on localhost but not in GitLab CI.
I would like to build a test database using the postgres Docker image in GitLab CI, but it doesn't work. I have tried a lot of things and lost a lot of hours before asking here.
Below is my docker-compose.yml file:
version: "3"
services:
nginx:
image: nginx:latest
container_name: nginx
depends_on:
- postgres
- monapp
volumes:
- ./nginx-conf:/etc/nginx/conf.d
- ./util/certificates/certs:/etc/nginx/certs/localhost.crt
- ./util/certificates/private:/etc/nginx/certs/localhost.key
ports:
- 81:80
- 444:443
networks:
- monreseau
monapp:
image: monimage
container_name: monapp
depends_on:
- postgres
ports:
- "3000:3000"
networks:
- monreseau
command: "npm run local"
postgres:
image: postgres:9.6
container_name: postgres
environment:
POSTGRES_USER: postgres
POSTGRES_HOST: postgres
POSTGRES_PASSWORD: postgres
volumes:
- ./pgDatas:/var/lib/postgresql/data/
- ./db_dumps:/home/dumps/
ports:
- "5432:5432"
networks:
- monreseau
networks:
monreseau:
and below is my .gitlab-ci.yml file:
stages:
  # - build
  - test
image:
  name: docker/compose:latest
services:
  - docker:dind
before_script:
  - docker version
  - docker-compose version
variables:
  DOCKER_HOST: tcp://docker:2375/
# build:
#   stage: build
#   script:
#     - docker build -t monimage .
#     - docker-compose up -d
test:
  stage: test
  script:
    - docker build -t monimage .
    - docker-compose up -d
    - docker ps
    - docker exec -i postgres psql -U postgres -h postgres -f /home/dumps/test/dump_test_001 -c \\q
    - exit
    - docker exec -i monapp ./node_modules/.bin/env-cmd -f ./env/.env.builded-test npx jasmine spec/auth_queries.spec.js
    - exit
this is the content of the docker ps output on the GitLab CI server:
[screenshot: docker ps on GitLab CI]
I thought pointing the host at postgres would work, but no, I always get this in the GitLab CI terminal:
psql: could not connect to server: Connection refused
Is the server running on host "postgres" (172.19.0.2) and accepting
TCP/IP connections on port 5432?
I also tried to put docker as the host, but then I get this error:
psql: could not translate host name "docker" to address: Name or service not known
One small note: it works on localhost on my computer when I run make builded-test.
Below is my makefile:
builded-test:
	docker build -t monimage .
	docker-compose up -d
	docker ps
	docker exec -i postgres psql -U postgres -h postgres -f /home/dumps/test/dump_test_001 -c \\q
	exit
	docker exec -i monapp ./node_modules/.bin/env-cmd -f ./env/.env.builded-test npx jasmine spec/auth_queries.spec.js
	exit
	docker-compose down
I want to get the postgres image working in my docker-compose on GitLab CI so I can run my tests. Please help me :) thanks in advance
UPDATE
Now it works in gitlab-runner, but still not on GitLab when I push. I updated the files as follows.
I added:
variables:
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: ""
  POSTGRES_HOST_AUTH_METHOD: trust
and changed:
test:
  stage: test
  script:
    - docker build -t monimage .
    - docker-compose up -d
    - docker ps
    - docker exec postgres psql -U postgres -h postgres -f /home/dumps/test/dump_test_001
    - docker exec monapp ./node_modules/.bin/env-cmd -f ./env/.env.builded-test npx jasmine spec/auth_queries.spec.js
in the .gitlab-ci.yml
but it still doesn't work when I push to GitLab; it gives me:
psql: could not connect to server: Connection refused
Is the server running on host "postgres" (172.19.0.2) and accepting
TCP/IP connections on port 5432?
Any ideas? :)
Maybe you need to wait for the PostgreSQL service to be up and running.
Can you add a 10 second delay before trying the psql stuff? Something like:
- sleep 10
If it works, you can then switch to a more targeted way of waiting for PostgreSQL to be initialized, like Docker wait for postgresql to be running
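For instance, a sketch of what that could look like in the test job's script (assuming the container is named postgres as in the compose file, and using pg_isready, which ships in the official postgres image):
- docker-compose up -d
- until docker exec postgres pg_isready -U postgres; do sleep 1; done
- docker exec postgres psql -U postgres -h postgres -f /home/dumps/test/dump_test_001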

How to fix error "Error: Database is uninitialized and superuser password is not specified."

Hello, I get this error after I run docker-compose build up:
postgres_1 | Error: Database is uninitialized and superuser password is not specified.
And below is my docker-compose.yml file:
version: '3.6'
services:
  smart-brain-api:
    container_name: backend
    build: ./
    command: npm start
    working_dir: /usr/src/smart-brain-api
    ports:
      - "3000:3000"
    volumes:
      - ./:/usr/src/smart-brain-api
  # Postgres Database
  postgres:
    image: postgres
    ports:
      - "5432:5432"
You can use the POSTGRES_HOST_AUTH_METHOD environment variable by making the following change to your docker-compose.yml.
db:
  image: postgres:9.6-alpine
  environment:
    POSTGRES_DB: "db"
    POSTGRES_HOST_AUTH_METHOD: "trust"
The above will solve the error.
To avoid that, you can specify the following environment variables for the postgres container in your docker-compose file.
POSTGRES_PASSWORD
This environment variable is normally required for you to use the PostgreSQL image. This environment variable sets the superuser password for PostgreSQL. The default superuser is defined by the POSTGRES_USER environment variable.
POSTGRES_DB
This optional environment variable can be used to define a different name for the default database that is created when the image is first started. If it is not specified, then the value of POSTGRES_USER will be used.
For more information about Environment Variables check:
https://hub.docker.com/_/postgres
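Putting the two variables together, a minimal docker-compose sketch (all names and values here are placeholders) could look like:
services:
  postgres:
    image: postgres
    environment:
      POSTGRES_PASSWORD: mysecretpassword
      POSTGRES_DB: mydb
    ports:
      - "5432:5432"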
The container's startup message already explains how to run it; if you missed it, use the following:
To allow all connections without a password use:
docker run -e POSTGRES_HOST_AUTH_METHOD=trust postgres:9.6 (use the tag you need).
To specify postgres password for the superuser, use:
docker run -e POSTGRES_PASSWORD=<your_password> postgres:9.6 (use the tag you need).
You can change your docker-compose.yml file as in this example:
db:
  image: postgres:13
  environment:
    - "POSTGRES_HOST_AUTH_METHOD=trust"
Summing up, the command from the official Docker site:
docker run --name <YOUR_POSTGRES_DB> -e POSTGRES_PASSWORD=<YOUR_POSTGRES_PASSWORD> -d postgres
You can make your connection using the below docker command.
docker run -e POSTGRES_PASSWORD=<your_password> postgres:9.6

Enable logging in postgresql using docker-compose

I am using Postgres as a service in my docker-compose file. I want logging to a log file to be enabled when I do docker-compose up. One way to enable logging is by editing the postgresql.conf file, but that's not useful in this case. Another way is to do something like this:
docker run --name postgresql -itd --restart always sameersbn/postgresql:10-2 -c logging_collector=on
but this isn't useful either, because I am not starting it from an image but as a docker-compose service. Any idea how I can run docker-compose up with logging enabled in Postgres?
Here is the docker-compose file that passes the -c options via command:
version: '3.6'
services:
  postgresql:
    image: postgres:11.5
    container_name: platops_postgres
    volumes: ['platops-data:/var/lib/postgresql/data/', 'postgress-logs:/var/log/postgresql/']
    command: ["postgres", "-c", "logging_collector=on", "-c", "log_directory=/logs", "-c", "log_filename=postgresql.log", "-c", "log_statement=all"]
    environment:
      - POSTGRES_USER=postgresql
      - POSTGRES_PASSWORD=postgresql
    ports: ['5432:5432']
volumes:
  platops-data: {}
  # uncomment and set the path of the folder to maintain persistence
  # data-postgresql:
  #   driver: local
  #   driver_opts:
  #     o: bind
  #     type: none
  #     device: /path/of/db/postgres/data/
  postgress-logs: {}
  # uncomment and set the path of the folder to maintain persistence
  # data-postgresql:
  #   driver: local
  #   driver_opts:
  #     o: bind
  #     type: none
  #     device: /path/of/db/postgres/logs/
For more information, you can check the postgres container documentation.
Just like your command with docker run:
docker run --name postgresql -itd --restart always sameersbn/postgresql:10-2 -c logging_collector=on
you add the -c logging_collector=on argument to the ENTRYPOINT ["/sbin/entrypoint.sh"] to enable logging (see the Dockerfile).
In docker-compose.yml file, use command: like this:
version: "3.7"
services:
database:
image: sameersbn/postgresql:10-2
command: "-c logging_collector=on"
# ......
When the PostgreSQL container runs, it will execute: /sbin/entrypoint.sh -c logging_collector=on.
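If you prefer Compose's list syntax for command, an equivalent spelling (a cosmetic variation with the same effect on this image, since the list is likewise appended to the entrypoint) would be:
version: "3.7"
services:
  database:
    image: sameersbn/postgresql:10-2
    command: ["-c", "logging_collector=on"]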

Add custom config location to Docker Postgres image preserving its access parameters

I have written a Dockerfile like this:
FROM postgres:11.2-alpine
ADD ./db/postgresql.conf /etc/postgresql/postgresql.conf
CMD ["-c", "config_file=/etc/postgresql/postgresql.conf"]
It just adds a custom config location to the generic Postgres image.
Now I have the following docker-compose service description
db:
  build:
    context: .
    dockerfile: ./db/Dockerfile
  environment:
    POSTGRES_PASSWORD: passwordhere
    POSTGRES_USER: user
    POSTGRES_DB: db_name
  ports:
    - 5432:5432
  volumes:
    - ./run/db-data:/var/lib/db/data
The problem is that I can no longer connect remotely to the DB using these credentials if I add this config option. Without that CMD line it works just fine.
If I prepend "postgres" in CMD it has the same effect, because the underlying script prepends it itself.
Provided all the files are where they need to be, I believe the only problem with your setup is that you've omitted the actual executable from the CMD, specifying just the option. You need to actually run postgres:
CMD ["postgres", "-c", "config_file=/etc/postgresql/postgresql.conf"]
That should work!
EDIT in response to OP's first comment below
First, I did confirm that behavior doesn't change whether "postgres" is in the CMD or not. It's exactly as you said. Onward!
Then I thought there must be a problem with the particular postgresql.conf in use. If we could just figure out what the default file is... turns out we can!
How to get the existing postgres.conf out of the postgres image
1. Create docker-compose.yml with the following contents:
version: "3"
services:
db:
image: postgres:11.2-alpine
environment:
- POSTGRES_PASSWORD=passwordhere
- POSTGRES_USER=user
- POSTGRES_DB=db_name
ports:
- 5432:5432
volumes:
- ./run/db-data:/var/lib/db/data
2. Spin up the service using
$ docker-compose run --rm --name=postgres db
3. In another terminal get the location of the file used in this release:
$ docker exec -it postgres psql --dbname=db_name --username=user --command="SHOW config_file"
config_file
------------------------------------------
/var/lib/postgresql/data/postgresql.conf
(1 row)
4. View the contents of default postgresql.conf
$ docker exec -it postgres cat /var/lib/postgresql/data/postgresql.conf
5. Replace local config file
Now all we have to do is replace the local config file ./db/postgresql.conf with the contents of the known-working-state config and modify it as necessary.
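One setting worth double-checking while you do this (an educated guess about the cause, not something verified against the OP's file): the official image generates its config with listen_addresses = '*', whereas a stock sample config leaves it commented out and therefore listens on localhost only, which would produce exactly the "can no longer remotely connect" symptom. Make sure the custom file contains:
listen_addresses = '*'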
Database objects are only created once!
Database objects are only created once by the postgres container (source). So when developing the database parameters we have to remove them to make sure we're in a clean state.
Here's a nuclear (be careful!) option to
(1) force-remove all Docker containers, and then
(2) remove all Docker volumes not attached to containers:
$ docker rm $(docker ps -a -q) -f && docker volume prune -f
So now we can be sure to start from a clean state!
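If the nuclear option is too broad, a narrower alternative (assuming the state you want to clear lives in volumes created by this Compose project) is to let Compose clean up after itself:
$ docker-compose down --volumes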
Final setup
Let's bring our Dockerfile back into the picture (just like you have in the question).
docker-compose.yml
version: "3"
services:
db:
build:
context: .
dockerfile: ./db/Dockerfile
environment:
- POSTGRES_PASSWORD=passwordhere
- POSTGRES_USER=user
- POSTGRES_DB=db_name
ports:
- 5432:5432
volumes:
- ./run/db-data:/var/lib/db/data
Connect to the db
Now all we have to do is build from a clean state.
# ensure all volumes are deleted (see above)
$ docker-compose build
$ docker-compose run --rm --name=postgres db
We can now (still) connect to the database:
$ docker exec -it postgres psql --dbname=db_name --username=user --command="SELECT COUNT(1) FROM pg_database WHERE datname='db_name'"
Finally, we can edit the postgresql.conf starting from a known working state.
As per this other discussion, your CMD command only has arguments and is missing a command. Try:
CMD ["postgres", "-c", "config_file=/etc/postgresql/postgresql.conf"]

Docker wait for postgresql to be running

I am using PostgreSQL with Django in my project. I've got them in different containers, and the problem is that I need to wait for Postgres before running Django. At the moment I am doing it with sleep 5 in the command.sh file for the Django container. I also found that netcat can do the trick, but I would prefer a way without additional packages. curl and wget can't do this because they do not support the Postgres protocol.
Is there a way to do it?
I've spent some hours investigating this problem and I got a solution.
Docker's depends_on only considers service startup when deciding to run another service. As soon as the db is started, the app service tries to connect to it, but the db is not yet ready to receive connections. So you can check the db's health status in the app service and wait for the connection. Here is my solution; it solved my problem. :)
Important: I'm using docker-compose version 2.1.
version: '2.1'
services:
  my-app:
    build: .
    command: su -c "python manage.py runserver 0.0.0.0:8000"
    ports:
      - "8000:8000"
    depends_on:
      db:
        condition: service_healthy
    links:
      - db
    volumes:
      - .:/app_directory
  db:
    image: postgres:10.5
    ports:
      - "5432:5432"
    volumes:
      - database:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
volumes:
  database:
In this case it's not necessary to create a .sh file.
This will successfully wait for Postgres to start. (Specifically line 6). Just replace npm start with whatever command you'd like to happen after Postgres has started.
services:
  practice_docker:
    image: dockerhubusername/practice_docker
    ports:
      - 80:3000
    command: bash -c 'while !</dev/tcp/db/5432; do sleep 1; done; npm start'
    depends_on:
      - db
    environment:
      - DATABASE_URL=postgres://postgres:password@db:5432/practicedocker
      - PORT=3000
  db:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=practicedocker
If you have psql you could simply add the following code to your .sh file:
RETRIES=5
until psql -h $PG_HOST -U $PG_USER -d $PG_DATABASE -c "select 1" > /dev/null 2>&1 || [ $RETRIES -eq 0 ]; do
  echo "Waiting for postgres server, $((RETRIES--)) remaining attempts..."
  sleep 1
done
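If the retries run out, the loop above just falls through silently; a small follow-up check (same variables as above, a sketch rather than part of the original answer) makes the failure explicit:
if ! psql -h $PG_HOST -U $PG_USER -d $PG_DATABASE -c "select 1" > /dev/null 2>&1; then
  echo "Gave up waiting for postgres" >&2
  exit 1
fi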
The simplest solution is a short bash script:
while ! nc -z HOST PORT; do sleep 1; done;
./run-smth-else;
The problem with your solution, tiziano, is that curl is not installed by default and I wanted to avoid installing additional stuff. Anyway, I did what bereal said. Here is the script if anyone needs it.
import socket
import time
import os

port = int(os.environ["DB_PORT"])  # 5432

while True:
    try:
        # create a fresh socket on each attempt; a socket is not
        # reliably reusable after a failed connect()
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect(('myproject-db', port))
        s.close()
        break
    except socket.error:
        time.sleep(0.1)
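One possible way to wire this in (the filename wait_for_db.py is hypothetical; use wherever you saved the script) is to call it from command.sh before starting Django:
python wait_for_db.py
python manage.py migrate
python manage.py runserver 0.0.0.0:8000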
In your Dockerfile add wait and change your start command to use it:
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.7.3/wait /wait
RUN chmod +x /wait
CMD /wait && npm start
Then, in your docker-compose.yml add a WAIT_HOSTS environment variable for your api service:
services:
  api:
    depends_on:
      - postgres
    environment:
      WAIT_HOSTS: postgres:5432
  postgres:
    image: postgres
    ports:
      - "5432:5432"
This has the advantage that it supports waiting for multiple services:
environment:
  WAIT_HOSTS: postgres:5432, mysql:3306, mongo:27017
For more details, please read their documentation.
wait-for-it is a small wrapper script which you can include in your application's image to poll a given host and port until it's accepting TCP connections.
It can be cloned in your Dockerfile with the command below:
RUN git clone https://github.com/vishnubob/wait-for-it.git
docker-compose.yml
version: "2"
services:
web:
build: .
ports:
- "80:8000"
depends_on:
- "db"
command: ["./wait-for-it/wait-for-it.sh", "db:5432", "--", "npm", "start"]
db:
image: postgres
Why not curl?
Something like this:
while ! curl http://$POSTGRES_PORT_5432_TCP_ADDR:$POSTGRES_PORT_5432_TCP_PORT/ 2>&1 | grep '52'
do
  sleep 1
done
(The grep '52' matches curl's error 52, "Empty reply from server", which indicates something is accepting TCP connections on the port even though it doesn't speak HTTP.)
It works for me.
I have managed to solve my issue by adding a healthcheck to the docker-compose definition.
db:
  image: postgres:latest
  ports:
    - 5432:5432
  healthcheck:
    test: "pg_isready --username=postgres && psql --username=postgres --list"
    timeout: 10s
    retries: 20
then in the dependent service you can check the health status:
my-service:
  image: myApp:latest
  depends_on:
    kafka:
      condition: service_started
    db:
      condition: service_healthy
source: https://docs.docker.com/compose/compose-file/compose-file-v2/#healthcheck
If the backend application itself has a PostgreSQL client, you can use the pg_isready command in an until loop. For example, suppose we have the following project directory structure,
.
├── backend
│   └── Dockerfile
└── docker-compose.yml
with a docker-compose.yml
version: "3"
services:
postgres:
image: postgres
backend:
build: ./backend
and a backend/Dockerfile
FROM alpine
RUN apk update && apk add postgresql-client
CMD until pg_isready --username=postgres --host=postgres; do sleep 1; done \
&& psql --username=postgres --host=postgres --list
where the 'actual' command is just a psql --list for illustration. Running docker-compose build and then docker-compose up shows the result of the psql --list command only after pg_isready logs postgres:5432 - accepting connections, as desired.
By contrast, I have found that the nc -z approach does not work consistently. For example, if I replace the backend/Dockerfile with
FROM alpine
RUN apk update && apk add postgresql-client
CMD until nc -z postgres 5432; do echo "Waiting for Postgres..." && sleep 1; done \
&& psql --username=postgres --host=postgres --list
then docker-compose build followed by docker-compose up gives me a different result: the psql command throws a FATAL error that the database system is starting up.
In short, using an until pg_isready loop (as also recommended here) is the preferable approach IMO.
There are a couple of solutions, as the other answers mentioned.
But don't make it complicated. Just let it fail fast, combined with restart: on-failure. Your service will open a connection to the db and may fail the first time. Just let it fail. Docker will restart your service until it goes green. Keep your service simple and business-focused.
version: '3.7'
services:
  postgresdb:
    hostname: postgresdb
    image: postgres:12.2
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=Ceo
  migrate:
    image: hanh/migration
    links:
      - postgresdb
    environment:
      - DATA_SOURCE=postgres://user:secret@postgresdb:5432/Ceo
    command: migrate sql --yes
    restart: on-failure # will restart until it succeeds
Check out restart policies.
None of the other solutions worked for me, except for the following:
version: '3.8'
services:
  postgres:
    image: postgres:latest
    environment:
      - POSTGRES_DB=mydbname
      - POSTGRES_USER=myusername
      - POSTGRES_PASSWORD=mypassword
    healthcheck:
      test: ["CMD", "pg_isready", "-q", "-d", "mydbname", "-U", "myusername"]
      interval: 5s
      timeout: 5s
      retries: 5
  otherservice:
    image: otherserviceimage
    depends_on:
      postgres:
        condition: service_healthy
Thanks to this thread: https://github.com/peter-evans/docker-compose-healthcheck/issues/16
Sleeping until pg_isready returns true is unfortunately not always reliable. If your postgres container has at least one initdb script specified, postgres restarts during its bootstrap procedure after it is first started, so it might not be ready yet even though pg_isready has already returned true.
What you can do instead is wait until the docker logs for that instance contain the string PostgreSQL init process complete; ready for start up., and only then proceed with the pg_isready check.
Example:
start_postgres() {
  docker-compose up -d --no-recreate postgres
}

wait_for_postgres() {
  until docker-compose logs | grep -q "PostgreSQL init process complete; ready for start up." \
      && docker-compose exec -T postgres sh -c "PGPASSWORD=\$POSTGRES_PASSWORD PGUSER=\$POSTGRES_USER pg_isready --dbname=\$POSTGRES_DB" > /dev/null 2>&1; do
    printf "\rWaiting for postgres container to be available ... "
    sleep 1
  done
  printf "\rWaiting for postgres container to be available ... done\n"
}

start_postgres
wait_for_postgres
You can use the manage.py command "check" to test whether the database is available (and wait 2 seconds if not, then check again).
For instance, if you do this in your command.sh file before running the migration, Django has a valid DB connection while running the migration command:
...
echo "Waiting for db.."
python manage.py check --database default > /dev/null 2> /dev/null
until [ $? -eq 0 ]; do
  sleep 2
  python manage.py check --database default > /dev/null 2> /dev/null
done
echo "Connected."

# Migrate the last database changes
python manage.py migrate
...
PS: I'm not a shell expert, please suggest improvements.
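Since the author invites improvements: one slightly tighter variant (same behavior, just testing the command directly in the loop condition instead of going through $?):
echo "Waiting for db.."
until python manage.py check --database default > /dev/null 2>&1; do
  sleep 2
done
echo "Connected."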
#!/bin/sh

POSTGRES_VERSION=9.6.11
CONTAINER_NAME=my-postgres-container

# start the postgres container
docker run --rm \
  --name $CONTAINER_NAME \
  -e POSTGRES_PASSWORD=docker \
  -d \
  -p 5432:5432 \
  postgres:$POSTGRES_VERSION

# wait until postgres is ready to accept connections
until docker run \
  --rm \
  --link $CONTAINER_NAME:pg \
  postgres:$POSTGRES_VERSION pg_isready \
  -U postgres \
  -h pg; do sleep 1; done
An example for Node.js and a Postgres API.
#!/bin/bash
# entrypoint.dev.sh

echo "Waiting for postgres to get up and running..."
while ! nc -z postgres_container 5432; do
  # where postgres_container is the host; in my case, it is a Docker container.
  # You can use localhost, for example, if your database is running locally.
  echo "waiting for postgres listening..."
  sleep 0.1
done
echo "PostgreSQL started"

yarn db:migrate
yarn dev

# Dockerfile
FROM node:12.16.2-alpine
ENV NODE_ENV="development"
RUN mkdir -p /app
WORKDIR /app
COPY ./package.json ./yarn.lock ./
RUN yarn install
COPY . .
CMD ["/bin/sh", "./entrypoint.dev.sh"]
If you want to run it with a single command, you can just connect to the container and check whether postgres is running:
docker exec -it $DB_NAME bash -c "\
  until psql -h $HOST -U $USER -d $DB_NAME -c 'select 1' >/dev/null 2>&1; do \
    echo 'Waiting for postgres server....'; \
    sleep 1; \
  done; \
  exit; \
"
echo "DB Connected !!"
Inspired by @tiziano's answer and the lack of nc or pg_isready, it seems that in a recent Docker Python image (python:3.9 here) curl is installed by default, and I have the following check running in my entrypoint.sh:
postgres_ready() {
  $(which curl) http://$DBHOST:$DBPORT/ 2>&1 | grep '52'
}

until postgres_ready; do
  >&2 echo 'Waiting for PostgreSQL to become available...'
  sleep 1
done
>&2 echo 'PostgreSQL is available.'
I tried a lot of methods: a Dockerfile, docker-compose YAML, a bash script. Only the last method helped me: a makefile.
docker-compose up --build -d postgres
sleep 2
docker-compose up --build -d app