I am getting a port issue in Docker Compose.
My compose file is:
version: '3.1'
services:
  db:
    hostname: postgres
    build:
      context: .
      dockerfile: Dockerfile.postgres
    restart: always
    ports:
      - "5432:5432"
  test:
    build:
      context: ../..
      dockerfile: Dockerfile.test
    environment:
      DB_URL: "jdbc:postgresql://postgres:5432/postgres?user=postgres&password=postgres"
    depends_on:
      - postgres
My Dockerfile for test is:
FROM openjdk:8
RUN \
  curl -L -o sbt-1.2.8.deb http://dl.bintray.com/sbt/debian/sbt-1.2.8.deb && \
  dpkg -i sbt-1.2.8.deb && \
  rm sbt-1.2.8.deb && \
  apt-get update && \
  apt-get install sbt && \
  sbt sbtVersion
ENV WORK_DIR="/test"
USER root
COPY . ${WORK_DIR}/
RUN cd ${WORK_DIR} && \
  sbt test
While running it I am getting:
org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
TEST_DB_URL is an env variable in application.conf.
I am running the services individually. Running:
docker compose -f docker-compose.yml up db
works fine, and docker ps shows:
0.0.0.0:5432->5432/tcp
But when I run docker compose -f docker-compose.yml up test, it downloads dependencies, compiles the code, and gives me Connection to localhost:5432 refused.
This looks like it has nothing to do with Docker networking; it is a configuration issue. The app is using the default from application.conf (localhost) instead of the override from the environment. You could narrow it down by simply printing the config in Scala and figuring out why the ConfigFactory doesn't pick up the environment variable, as sketched below.
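For example, with Typesafe Config the usual pattern is to declare an optional environment override next to the default. A minimal sketch, assuming the key is named db.url (your actual key name in application.conf may differ):

# application.conf
# default, used when DB_URL is not set
db.url = "jdbc:postgresql://localhost:5432/postgres"
# optional override, applied only when the DB_URL environment variable is set
db.url = ${?DB_URL}

And a quick way to print the resolved value from Scala to see which source won:

import com.typesafe.config.ConfigFactory

object ConfigCheck extends App {
  // prints the value after all overrides are applied
  println(ConfigFactory.load().getString("db.url"))
}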
Goal:
Run Postgres in docker by pulling postgres from docker hub (https://hub.docker.com/_/postgres)
Background:
I got this message when I tried running Postgres with Docker:
Error: Database is uninitialized and superuser password is not specified.
You must specify POSTGRES_PASSWORD to a non-empty value for the
superuser. For example, "-e POSTGRES_PASSWORD=password" on "docker run".
I found info about why at https://andrew.hawker.io/dailies/2020/02/25/postgres-uninitialized-error/.
Problem:
"Update your docker-compose.yml or corresponding configuration with the POSTGRES_HOST_AUTH_METHOD environment variable to revert back to previous behavior or implement a proper password." (https://andrew.hawker.io/dailies/2020/02/25/postgres-uninitialized-error/)
I don't understand how this solution resolves the current situation.
Where can I find the docker-compose.yml?
Info:
I'm a newbie in PostgreSQL and Docker.
If you want to run PostgreSQL in Docker, you have to pass the POSTGRES_PASSWORD variable to the docker run command, like this:
$ docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
The error message is telling you exactly that.
Read more at https://hub.docker.com/_/postgres
A docker-compose.yml is just another option; you can also use docker run as in the first answer. If you want to use docker-compose, the documentation has an example:
stack.yml
version: '3.1'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
and run: docker-compose -f stack.yml up.
Everything is here:
https://hub.docker.com/_/postgres
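If you really do want the passwordless behavior the quoted article mentions, the same compose file can instead set POSTGRES_HOST_AUTH_METHOD. A minimal sketch, suitable for local development only since it disables password authentication:

version: '3.1'
services:
  db:
    image: postgres
    restart: always
    environment:
      # revert to the old no-password behavior (trusts all connections)
      POSTGRES_HOST_AUTH_METHOD: trust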
I have a docker-compose.yml with two services, a custom image (the service called code) and a Postgres server. Below I attach the Dockerfile used to build the image called app for the first service, followed by the docker-compose.yml:
# Dockerfile of custom image
FROM ubuntu:latest
RUN apt-get update \
    && apt-get install -y python3-pip python3-dev \
    && cd /usr/local/bin \
    && ln -s /usr/bin/python3 python \
    && pip3 install --upgrade pip
WORKDIR /usr/app
COPY ./* ${PWD}/
ADD ./requirements.txt ./
RUN pip install -r requirements.txt
ADD ./ ./
# docker-compose.yml
version: '3.2'
services:
  code:
    image: app:latest
    ports:
      - 5001:5001
    networks:
      - docker-elk_elk
  postgres:
    image: postgres:9.5-alpine
    environment:
      POSTGRES_USER: postgres     # define credentials
      POSTGRES_PASSWORD: postgres # define credentials
      POSTGRES_DB: postgres       # define database
    ports:
      - 5432:5432 # Postgres port
    networks:
      - docker-elk_elk
networks:
  docker-elk_elk:
    external: true
Here docker-elk_elk refers to an external network created by another docker-compose stack, which I want the compose file above to join as well. However, when I run docker-compose run code bash and obtain a shell in the code service, curl https://postgres:5432 gives the following message: curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to postgres:5432. I've also tried curl http://postgres:5432, which returned curl: (52) Empty reply from server. Furthermore, for the docker-elk_elk network (created by the Elasticsearch-Logstash-Kibana stack), docker network ls gives:
NETWORK ID NAME DRIVER SCOPE
8a54fe394fe8 docker-elk_elk bridge local
I'm really lost and confused; can someone help me out? If there is any piece of info that might be necessary or helpful and wasn't included above, please let me know.
I forgot to mention that app is just a simple Python application (not a web app or anything using sophisticated Python libraries).
P.S. Something I should perhaps have mentioned above: what I want to do is use the Ubuntu container with the app inside to query (and send data to) both the Postgres db and Elasticsearch (which is in the other docker-compose stack).
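A side note on the probes used above: curl speaks HTTP/TLS, so it is not a reliable way to test Postgres, and the (52) Empty reply from server actually suggests the TCP connection to postgres:5432 succeeded. A minimal sketch of a more meaningful check from inside the code container, assuming the Postgres client tools can be installed in the Ubuntu-based app image:

# install the client once in the image (Ubuntu base, as in the Dockerfile above)
apt-get update && apt-get install -y postgresql-client

# readiness probe: exits 0 when the server accepts connections
pg_isready -h postgres -p 5432 -U postgres

# or run an actual query, using the credentials from the compose file
PGPASSWORD=postgres psql -h postgres -U postgres -d postgres -c 'SELECT 1'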
We have a web app using a PostgreSQL DB, deployed to Tomcat in a CentOS 7 environment. We are using docker (and docker-compose), running on an Azure virtual machine.
We cannot pre-set the admin user 'postgres' password (e.g. mysecret) during the docker/docker-compose build process.
We have tried the environment: setting in the docker-compose.yml file, and also ENV in the ./postgres Dockerfile. Neither works.
I had to manually use 'docker exec -it /bin/bash' to run the psql command to set the password. We would like to avoid the manual step.
$ cat docker-compose.yml
version: '0.2'
services:
  app-web:
    build: ./tomcat
    ports:
      - "80:80"
    links:
      - app-db
  app-db:
    build: ./postgres
    environment:
      - "POSTGRES_PASSWORD=password"
      - "PGPASSWORD=password"
    expose:
      - "5432"
$ cat postgres/Dockerfile
FROM centos:7
RUN yum -y install https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-6-x86_64/pgdg-centos96-9.6-3.noarch.rpm
RUN yum -y install postgresql96 postgresql96-server postgresql96-libs postgresql96-contrib postgresql96-devel
RUN yum -y install initscripts
USER postgres
ENV POSTGRES_PASSWORD=mysecret
ENV PGPASSWORD=mysecret
RUN /usr/pgsql-9.6/bin/initdb /var/lib/pgsql/9.6/data
RUN echo "listen_addresses = '*'" >> /var/lib/pgsql/9.6/data/postgresql.conf
RUN echo "PORT = 5432" >> /var/lib/pgsql/9.6/data/postgresql.conf
RUN echo "local all all trust" > /var/lib/pgsql/9.6/data/pg_hba.conf
RUN echo "host all all 127.0.0.1/32 trust" >> /var/lib/pgsql/9.6/data/pg_hba.conf
RUN echo "host all all ::1/128 ident" >> /var/lib/pgsql/9.6/data/pg_hba.conf
RUN echo "host all all 0.0.0.0/0 md5" >> /var/lib/pgsql/9.6/data/pg_hba.conf
EXPOSE 5432
ENTRYPOINT ["/usr/pgsql-9.6/bin/postgres","-D","/var/lib/pgsql/9.6/data","-p","5432"]
Web app deployment fails with a DB authentication error (wrong password); the password 'mysecret' is defined in the web app's JPA persistence.xml. We assume the password was never properly set (a default initdb does not set one).
After manually changing the password via the docker exec command mentioned above, everything works.
We would like to set the password at docker build time. Based on the Postgres/Docker documentation and some threads, either environment: in the docker-compose settings or ENV in the Dockerfile should work. Neither works for us.
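Worth noting: POSTGRES_PASSWORD is implemented by the official postgres image's entrypoint script, not by the PostgreSQL server itself, so a custom centos:7 image that runs initdb directly will ignore it. A minimal sketch of setting the password at build time instead, replacing the plain initdb line in the Dockerfile above with initdb's --pwfile option (the password still appears in docker history, so this is convenience, not security):

USER postgres
# write the superuser password to a temp file, let initdb consume it,
# then remove the file within the same layer
RUN echo "mysecret" > /tmp/pwfile && \
    /usr/pgsql-9.6/bin/initdb --auth=md5 --pwfile=/tmp/pwfile /var/lib/pgsql/9.6/data && \
    rm /tmp/pwfile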
I need to use MongoDB with Docker. So far I am able to create a container, start the Mongo server, and access it from the host machine (via Compass).
What i want to do next is import data from a script into the Mongo database that is running in the container.
I'm getting the following error when trying to import the data:
Failed: error connecting to db server: no reachable servers
Here's what I'm doing...
docker-compose.yml:
version: '3.7'
services:
  mongodb:
    container_name: mongodb_db
    build:
      context: .
      dockerfile: .docker/db/Dockerfile
      args:
        DB_IMAGE: mongo:4.0.9
    ports:
      - 30001:27017
    environment:
      MONGO_DATA_DIR: /data/db
      MONGO_LOG_DIR: /dev/null
  db_seed:
    build:
      context: .
      dockerfile: .docker/db/seed/Dockerfile
      args:
        DB_IMAGE: mongo:4.0.9
    links:
      - mongodb
mongodb Dockerfile:
ARG DB_IMAGE
FROM ${DB_IMAGE}
CMD ["mongod", "--smallfiles"]
db_seed Dockerfile:
ARG DB_IMAGE
FROM ${DB_IMAGE}
RUN mkdir -p /srv/tmp/import
COPY ./app/import /srv/tmp/import
# set working directory
WORKDIR /srv/tmp/import
RUN mongoimport -h mongodb -d dbName --type csv --headerline -c categories --file=categories.csv # Failed: error connecting to db server: no reachable servers
RUN mongo mongodb/dbName script.js
What am I doing wrong here? How can I solve this issue?
I would like to keep the current file organisation (docker-compose, mongodb Dockerfile and db_seed Dockerfile).
I found the reason for the issue: the import command was being executed before the mongo service had started.
To solve it, I created a .sh script with the import commands and execute it using ENTRYPOINT. This way the script only runs once the db_seed container starts, and since db_seed depends on the mongodb container, the mongo service is already up by then. A sketch of such a script follows the Dockerfile below.
db_seed Dockerfile
ARG DB_IMAGE
FROM ${DB_IMAGE}
RUN mkdir -p /srv/tmp/import
COPY ./app/import /srv/tmp/import
ENTRYPOINT ["./srv/tmp/import/import.sh"]
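The answer doesn't show the script itself; a minimal sketch of what import.sh might contain (the until loop is an extra safeguard, since container start order alone doesn't guarantee mongod is already accepting connections):

#!/bin/sh
# block until the mongodb service answers a ping
until mongo --host mongodb --eval "db.adminCommand('ping')" > /dev/null 2>&1; do
  echo "waiting for mongodb..."
  sleep 1
done
# run the same import commands as before, now at container start time
mongoimport -h mongodb -d dbName --type csv --headerline -c categories --file=categories.csv
mongo mongodb/dbName script.js

Remember the script must be executable in the build context (chmod +x import.sh) for the ENTRYPOINT to work.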
This is a CircleCI question, I guess.
I am quite happy with CircleCI, but now I ran into a problem and I don't know what I'm doing wrong.
Maybe this is something very easy, but I don't see it.
In short
I can't make containers talk to each other on CircleCI.
Problem
Basically what I wanted to do is start a server container and a client container, and then let them talk to each other.
I created a minimal example here: https://github.com/mRcSchwering/circleci-integration-test
The README.md basically explains the desired outcome.
I have a .circleci/config.yml like this:
version: 2
jobs:
  build:
    docker:
      - image: docker:18.03.0-ce-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install docker-compose
          command: |
            apk --update add py2-pip
            /usr/bin/pip2 install docker-compose
            docker-compose --version
      - run:
          name: Start Container
          command: |
            docker-compose up -d
            docker-compose ps
      - run:
          name: Let client talk to server
          command: |
            docker-compose run client psql -h server -p 5432 -U postgres -c "\l"
In a docker container, docker-compose is installed, which is then used to start a server and a client (postgres here). In the last step I am telling the client to query the server. However, it cannot find the server:
#!/bin/sh -eo pipefail
docker-compose run client psql -h server -p 5432 -U postgres -c "\l"
Starting project_server_1 ...
psql: could not connect to server: Connection refused
Is the server running on host "server" (172.18.0.2) and accepting
TCP/IP connections on port 5432?
Exited with code 2
Files
The docker-compose.yml looks like this
version: '2'
services:
  server:
    image: postgres:9.5.12-alpine
    networks:
      - internal
    expose:
      - '5432'
  client:
    build:
      context: .
    networks:
      - internal
    depends_on:
      - server
networks:
  internal:
    driver: bridge
where the client is built from a Dockerfile like this:
FROM alpine:3.7
RUN apk --no-cache add postgresql-client && rm -rf /var/cache/apk/*
Note
If I repeat everything on my Linux machine (also with docker-in-docker), it works.
But I guess some things work completely differently on CircleCI.
I found some people mentioning that networking and bind mounts can be tricky on CircleCI, but I didn't find anything that helped me.
There is this doc, but I thought I was already doing that.
Then there is this project where someone seems to do the same thing on CircleCI successfully.
But I cannot figure out what's different there...
Anyway, I would really appreciate your help; for now I have given up on this.
Best
Marc
OK, in the meantime I (well, actually it was halfer from the CircleCI forum) noticed that docker-compose run client psql -h server -p 5432 -U postgres -c "\l" was run before the server was up and running. A simple sleep 5 after docker-compose up -d fixes the problem, as sketched below.
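Applied to the config above, the fixed step could look like this (a minimal sketch; sleep 5 is the quick fix from the answer, and polling pg_isready or retrying the psql command would be a more robust variant):

      - run:
          name: Start Container
          command: |
            docker-compose up -d
            # give the postgres server a moment to start accepting connections
            sleep 5
            docker-compose ps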