Running psql command from a docker container - postgresql

I'm having trouble connecting a Postgres database to my app using docker-compose.
My understanding of Docker Compose is that it lets me combine two containers so that they can communicate with each other. Suppose I have an app in the app container that runs the psql command (just a one-liner Python script calling os.system("psql")). Since the app container does not have Postgres installed, it won't be able to run psql by itself. However, I thought combining the two containers in docker-compose.yml would let me run psql, but apparently not.
What am I missing here?
I am using two Postgres images because I'm trying to find regression bugs between the two DBMS versions.
version: "3"
services:
app:
image: "app:1.0"
depends_on:
- postgres9
- postgres12
ports:
- 8080:80
postgres9:
image: postgres:9.6
environment:
POSTGRES_PASSWORD: mysecretpassword
POSTGRES_USER: postgres
POSTGRES_DB: test_bd
ports:
- '5432:5432'
postgres12:
image: postgres:12
environment:
POSTGRES_PASSWORD: mysecretpassword
POSTGRES_USER: postgres
POSTGRES_DB: test_bd
ports:
- '5435:5435'

Each Docker container has a self-contained filesystem. You can never directly run commands from the host or from other containers' filesystems; anything you want to run needs to be installed in the container (really, in its image's Dockerfile).
If you want to run a tool like psql, it needs to be installed in your image. You don't say what your base image is, but if it's based on Debian or Ubuntu, you need to install the postgresql-client package:
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --no-install-recommends --assume-yes \
      postgresql-client
The right approach here is to add a standard Python PostgreSQL client library, like psycopg2, to your project's Python Pipfile, setup.py, and/or requirements.txt, and use that library instead of shelling out to psql. You will also need the PostgreSQL C library header files to install that package; instead of postgresql-client, install the Debian libpq-dev package.
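For illustration, a minimal sketch of what that looks like with psycopg2, using the service name and credentials from the compose file above (the hostname resolves because Compose puts all services on a shared network):

import psycopg2  # pip install psycopg2-binary

# Connect to the postgres9 service; the Compose service name doubles as the hostname.
conn = psycopg2.connect(
    host="postgres9",
    dbname="test_bd",
    user="postgres",
    password="mysecretpassword",
)
with conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])
conn.close()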

In your case, the two Postgres containers are, from the app container's point of view, separate hosts on the Compose network. What you need is to specify the correct host in the psql command. For the postgres12 container it might look like this:
PGPASSWORD="mysecretpassword" psql -h postgres12 -d test_bd -U postgres
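Since the goal is finding regressions between the two versions, the same query can then be pointed at each service by hostname, e.g.:

PGPASSWORD="mysecretpassword" psql -h postgres9 -d test_bd -U postgres -c "SELECT version();"
PGPASSWORD="mysecretpassword" psql -h postgres12 -d test_bd -U postgres -c "SELECT version();"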

Related

How to Run PostgreSQL by using Docker

Goal:
Run Postgres in docker by pulling postgres from docker hub (https://hub.docker.com/_/postgres)
Background:
I got this message when I tried running Postgres with Docker:
Error: Database is uninitialized and superuser password is not specified.
You must specify POSTGRES_PASSWORD to a non-empty value for the
superuser. For example, "-e POSTGRES_PASSWORD=password" on "docker run".
I found an explanation of why this happens at https://andrew.hawker.io/dailies/2020/02/25/postgres-uninitialized-error/.
Problem:
"Update your docker-compose.yml or corresponding configuration with the POSTGRES_HOST_AUTH_METHOD environment variable to revert back to previous behavior or implement a proper password." (https://andrew.hawker.io/dailies/2020/02/25/postgres-uninitialized-error/)
I don't understand how to apply this solution to my situation.
Where can I find the docker-compose.yml?
Info:
* I'm a newbie with Postgres and Docker.
If you want to run PostgreSQL in Docker, you have to pass the password variable in the docker run command, like this:
$ docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
The error message is telling you exactly that.
Read more at https://hub.docker.com/_/postgres
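Once the container is up, you can verify that it works by opening psql inside it (a quick check, reusing the container name some-postgres from the command above):

docker exec -it some-postgres psql -U postgres -c "SELECT version();"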
A docker-compose.yml is just another option; you can also run it with plain docker run as in the first answer. If you want to use docker-compose, the documentation has an example of it:
stack.yml
version: '3.1'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
and run: docker-compose -f stack.yml up.
Everything is here:
https://hub.docker.com/_/postgres
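With that file you can also sanity-check the db service directly (a sketch, assuming the stack was started with docker-compose -f stack.yml up; no password is needed because local connections inside the container are trusted):

docker-compose -f stack.yml exec db psql -U postgres -c "SELECT 1;"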

not able to access Postgres container in docker compose

I have a docker-compose.yml with two services, a custom image (the service called code) and a Postgres server. Below I attach the Dockerfile used to build the image called app for the first service, followed by the docker-compose.yml:
# Dockerfile of custom image
FROM ubuntu:latest
RUN apt-get update \
 && apt-get install -y python3-pip python3-dev \
 && cd /usr/local/bin \
 && ln -s /usr/bin/python3 python \
 && pip3 install --upgrade pip
WORKDIR /usr/app
COPY ./* ${PWD}/
ADD ./requirements.txt ./
RUN pip install -r requirements.txt
ADD ./ ./
# docker-compose.yml
version: '3.2'
services:
  code:
    image: app:latest
    ports:
      - 5001:5001
    networks:
      - docker-elk_elk
  postgres:
    image: postgres:9.5-alpine
    environment:
      POSTGRES_USER: postgres        # define credentials
      POSTGRES_PASSWORD: postgres    # define credentials
      POSTGRES_DB: postgres          # define database
    ports:
      - 5432:5432                    # Postgres port
    networks:
      - docker-elk_elk
networks:
  docker-elk_elk:
    external: true
Here docker-elk_elk points to an external network where another docker-compose stack runs, which I want the stack above to join as well. However, when I run docker-compose run code bash and obtain a shell in the code service, curl https://postgres:5432 gives the following message: curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to postgres:5432. I've also tried curl http://postgres:5432, which returned curl: (52) Empty reply from server. Furthermore, for the docker-elk_elk network (created by the Elasticsearch-Logstash-Kibana stack), docker network ls gives:
NETWORK ID          NAME             DRIVER    SCOPE
8a54fe394fe8        docker-elk_elk   bridge    local
I'm really lost and confused; can someone help me out? If there is any piece of info that might be necessary or helpful and wasn't included above, please let me know.
I forgot to mention that app is just a simple Python application (not a web app or anything using sophisticated Python libraries).
P.S. Something that perhaps I should have mentioned above: what I want to do is use the Ubuntu container with the app inside to query (and send data to) both the Postgres database and Elasticsearch (which is in the other docker-compose stack).
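One thing worth noting: curl speaks HTTP(S), not the PostgreSQL wire protocol, so both curl errors above can occur even when the network link is fine. A more meaningful connectivity check from the code container would be something like this (a sketch, assuming the postgresql-client package is installed in the app image):

pg_isready -h postgres -p 5432 -U postgres
# expected output when the server is reachable:
# postgres:5432 - accepting connections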

Postgresql version in docker container is not compatible with its data

Mac OS 10.13.6
Docker 19.03.5, build 633a0ea
I created two Docker containers, web and db, using docker-compose.yml. This worked for several months. Recently I decided to rebuild the containers from scratch, so I removed the existing ones and started over:
$ docker-compose -f docker-compose-dev.yml build --no-cache
$ docker-compose -f docker-compose-dev.yml up -d
During the "build" run, building the db container yields this:
DETAIL: The data directory was initialised by PostgreSQL version 9.6,
which is not compatible with this version 11.2.
exited with code 1
The db container does not start, so I can not check what it has inside.
My containers are defined like this:
version: '3'
services:
  web:
    restart: unless-stopped
    container_name: web
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "8000:8000"
    environment:
      DJANGO_SETTINGS_MODULE: '<my_app>.settings.postgres'
      DB_NAME: 'my_db'
      DB_USER: 'my_db_user'
      DB_PASS: 'my_db_user'
      DB_HOST: 'my_db_host'
      PRODUCTION: 'false'
      DEBUG: 'True'
    depends_on:
      - db
    volumes:
      - ./:/usr/src/app/
  db:
    image: postgres:11.2-alpine
    volumes:
      - myapp-db-dev:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=<my_db>
      - POSTGRES_USER=<my_db_user>
      - POSTGRES_PASSWORD=<my_db_password>
volumes:
  myapp-db-dev:
My local postgresql is 11.3 (which should be irrelevant):
$ psql --version
psql (PostgreSQL) 11.3
and my local postgresql data directory was removed completely
$ rm -rf /usr/local/var/postgres
However, brew considers it already up-to-date:
$ brew postgresql-upgrade-database
Error: postgresql data already upgraded!
I read Stack Overflow questions 17822974 and 19076980; that advice did not help.
How do I fix this data incompatibility? If possible, I would like to avoid downgrading Postgres. I don't even get what data it's talking about at that point; all the data is migrated later in a separate step.
It seems like on the first run Postgres 9.6 was specified as the image, so the container was initialized and the data was written to the myapp-db-dev named volume. Then the version was changed and you got the error. A possible solution (sketched below) would be:
1. Temporarily downgrade the version to Postgres 9.6, e.g. specify postgres:9.6.
2. Go into the container and dump the data with the pg_dump utility.
3. Change the version to 11.2 and specify a new volume (it's good advice to use a host volume).
4. Restore the data.
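A sketch of those steps on the command line, using the service and volume names from the compose file above (my_db_user stands in for the POSTGRES_USER placeholder; adjust to your real values):

# 1. Pin the old version in docker-compose-dev.yml: image: postgres:9.6-alpine
docker-compose -f docker-compose-dev.yml up -d db
# 2. Dump the whole cluster to a file on the host (-T disables the pseudo-tty so the redirect stays clean)
docker-compose -f docker-compose-dev.yml exec -T db pg_dumpall -U my_db_user > dump.sql
# 3. Switch the image back to postgres:11.2-alpine, point it at a fresh volume, start it
docker-compose -f docker-compose-dev.yml up -d db
# 4. Restore the dump into the new cluster
docker-compose -f docker-compose-dev.yml exec -T db psql -U my_db_user -d postgres < dump.sql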

Why is postgres container ignoring /docker-entrypoint-initdb.d/* in Gitlab CI

Gitlab CI keeps ignoring the sql-files in /docker-entrypoint-initdb.d/* in this project.
here is docker-compose.yml:
version: '3.6'
services:
  testdb:
    image: postgres:11
    container_name: lbsn-testdb
    restart: always
    ports:
      - "65432:5432"
    volumes:
      - ./testdb/init:/docker-entrypoint-initdb.d
here is .gitlab-ci.yml:
stages:
  - deploy
deploy:
  stage: deploy
  image: debian:stable-slim
  script:
    - bash ./deploy.sh
The deployment script basically uses rsync to deploy the content of the repository to the server via SSH:
rsync -rav --chmod=Du+rwx,Dgo-rwx,u+rw,go-rw -e "ssh -l gitlab-ci" --exclude=".git" --delete ./ "gitlab-ci@$DEPLOY_SERVER:test/"
and then SSHes into the server to stop and restart the container:
ssh "gitlab-ci@$DEPLOY_SERVER" "cd test && docker-compose down && docker-compose up --build --detach"
This all goes well, but when the container starts up, it is supposed to run all the files that are in /docker-entrypoint-initdb.d/* as we can see here.
But instead, when doing docker logs -f lbsn-testdb on the server, I can see it stating
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
and I have no clue why that happens. When running this container locally, or even when I SSH into that server, clone the repo, and bring up the containers manually, it all goes well and the sql files are parsed. Just not when GitLab CI does it.
Any ideas on why that is?
This was easier than I expected, and ultimately had nothing to do with GitLab CI, but with file permissions.
I passed --chmod=Du+rwx,Dgo-rwx,u+rw,go-rw to rsync, which looked really secure because only the user can do stuff. I confess I probably copy-pasted it from somewhere on the internet. But the files are then mounted into the Docker container, and in there they have those permissions as well:
-rw------- 1 1005 1004 314 May 8 15:48 100-create-database.sql
On the host my gitlab-ci user owns those files; inside the container they are also owned by some user with ID 1005, and no permissions are given to any other user.
Inside the container, however, the user who runs the init scripts is postgres, and it can't read those files. Instead of complaining about that, it just ignores them. That might be something to create an issue about…
Now that I pass --chmod=D755,F644, it looks like this:
-rw-r--r-- 1 1005 1004 314 May 8 15:48 100-create-database.sql
and the docker logs say
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/100-create-database.sql
Too easy to think of in the first place :-/
If you have already run the postgres service before, the init files will be ignored when you restart it, so try --build to build the image again:
docker-compose up --build -d
And before you run it again:
Check the existing volumes with
docker volume ls
then remove the one your pg service is using with
docker volume rm {volume_name}
Make sure the volume is not in use by a container; if it is, remove that container as well.
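Equivalently, if the volume belongs to this Compose project, one command stops the containers and removes the project's volumes in one go:

docker-compose down -v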
I found this topic while investigating a similar problem with a PostgreSQL installation using docker-compose.
The solution is basically the same. For the provided configuration:
version: '3.6'
services:
  testdb:
    image: postgres:11
    container_name: lbsn-testdb
    restart: always
    ports:
      - "65432:5432"
    volumes:
      - ./testdb/init:/docker-entrypoint-initdb.d
Your deployment script should set 0755 permissions on your postgres volume directory, e.g. chmod -R 0755 ./testdb in this case. It is important to make all subdirectories visible, so the -R option is required.
The official Postgres image (the Alpine variant used below) runs under an internal postgres user with UID 70. Your application user on the host most likely has a different UID, like 1000 or something similar. That is the reason the postgres init script misses installation steps: a permissions error. This issue has existed for several years and is still present in the latest PostgreSQL version (currently 12.1).
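You can confirm which UID the image uses with a one-off container (a quick check against the Alpine image from the example below):

docker run --rm postgres:12.1-alpine id postgres
# prints something like: uid=70(postgres) gid=70(postgres) ...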
Please be aware of the security implications of init files that are readable by everyone. It is good practice to use shell environment variables to pass secrets into the init script.
Here is a docker-compose example:
postgres:
  image: postgres:12.1-alpine
  container_name: app-postgres
  environment:
    - POSTGRES_USER
    - POSTGRES_PASSWORD
    - APP_POSTGRES_DB
    - APP_POSTGRES_SCHEMA
    - APP_POSTGRES_USER
    - APP_POSTGRES_PASSWORD
  ports:
    - '5432:5432'
  volumes:
    - $HOME/app/conf/postgres:/docker-entrypoint-initdb.d
    - $HOME/data/postgres:/var/lib/postgresql/data
A corresponding script create-users.sh for creating users may look like this:
#!/bin/bash
set -o nounset
set -o errexit
set -o pipefail
POSTGRES_USER="${POSTGRES_USER:-postgres}"
POSTGRES_PASSWORD="${POSTGRES_PASSWORD}"
APP_POSTGRES_DB="${APP_POSTGRES_DB:-app}"
APP_POSTGRES_SCHEMA="${APP_POSTGRES_SCHEMA:-app}"
APP_POSTGRES_USER="${APP_POSTGRES_USER:-appuser}"
APP_POSTGRES_PASSWORD="${APP_POSTGRES_PASSWORD:-app}"
DATABASE="${APP_POSTGRES_DB}"
# Create single database.
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --command "CREATE DATABASE ${DATABASE}"
# Create app user.
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --command "CREATE USER ${APP_POSTGRES_USER} SUPERUSER PASSWORD '${APP_POSTGRES_PASSWORD}'"
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --command "GRANT ALL PRIVILEGES ON DATABASE ${DATABASE} TO ${APP_POSTGRES_USER}"
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --dbname "${DATABASE}" --command "CREATE SCHEMA ${APP_POSTGRES_SCHEMA} AUTHORIZATION ${APP_POSTGRES_USER}"
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --command "ALTER USER ${APP_POSTGRES_USER} SET search_path = ${APP_POSTGRES_SCHEMA},public"

Networking using docker-compose in docker executor in circleci

This is a CircleCI question, I guess.
I am quite happy with CircleCI, but now I've run into a problem and I don't know what I'm doing wrong.
Maybe this is something very easy, but I don't see it.
In short
I can't make containers talk to each other on circleci.
Problem
Basically what I wanted to do is start a server container and a client container, and then let them talk to each other.
I created a minimal example here: https://github.com/mRcSchwering/circleci-integration-test
The README.md basically explains the desired outcome.
I have a .circleci/config.yml like this:
version: 2
jobs:
  build:
    docker:
      - image: docker:18.03.0-ce-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install docker-compose
          command: |
            apk --update add py2-pip
            /usr/bin/pip2 install docker-compose
            docker-compose --version
      - run:
          name: Start Container
          command: |
            docker-compose up -d
            docker-compose ps
      - run:
          name: Let client talk to server
          command: |
            docker-compose run client psql -h server -p 5432 -U postgres -c "\l"
Inside a Docker container, docker-compose is installed and then used to start a server and a client (Postgres here). In the last step I tell the client to query the server. However, it cannot find the server:
#!/bin/sh -eo pipefail
docker-compose run client psql -h server -p 5432 -U postgres -c "\l"
Starting project_server_1 ...
psql: could not connect to server: Connection refused
        Is the server running on host "server" (172.18.0.2) and accepting
        TCP/IP connections on port 5432?
Exited with code 2
Files
The docker-compose.yml looks like this
version: '2'
services:
  server:
    image: postgres:9.5.12-alpine
    networks:
      - internal
    expose:
      - '5432'
  client:
    build:
      context: .
    networks:
      - internal
    depends_on:
      - server
networks:
  internal:
    driver: bridge
where the client is built from a Dockerfile like this:
FROM alpine:3.7
RUN apk --no-cache add postgresql-client && rm -rf /var/cache/apk/*
Note
If I repeat everything on my Linux machine (also with docker-in-docker) it works.
But I guess some things work completely differently on CircleCI.
I found some people mentioning that networking and bind mounts can be tricky on CircleCI, but I didn't find anything that helped me.
There is this doc, but I thought I was already doing that.
Then there is this project where someone seems to do the same thing on CircleCI successfully.
But I cannot figure out what's different there...
Anyway I would really appreciate your help. So far I have given up on this.
Best
Marc
OK, in the meanwhile I (no, actually it was halfer from the CircleCI forum) noticed that docker-compose run client psql -h server -p 5432 -U postgres -c "\l" was run before the server was up and running. A simple sleep 5 after docker-compose up -d fixes the problem.
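Instead of a fixed sleep 5, a slightly more robust variant polls until the server accepts connections (a sketch using pg_isready, which ships with the postgresql-client package installed in the client image):

docker-compose up -d
# wait up to 30 seconds for the server to accept connections
for i in $(seq 1 30); do
  docker-compose run --rm client pg_isready -h server -p 5432 -U postgres && break
  sleep 1
done
docker-compose run client psql -h server -p 5432 -U postgres -c "\l"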