Can't connect to postgres db from pgadmin (both running on docker)? - postgresql

I'm running postgres and pgadmin4 on Docker with docker-compose up on Fedora 28, and I'm having trouble creating a new db server from pgadmin's web console.
This is the docker-compose.yml file I'm using.
version: '3.0'
services:
  db:
    image: postgres:9.6
    ports:
      - 5432:5432/tcp
    environment:
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=admin
      - POSTGRES_DB=mydb
  pgadmin:
    image: dpage/pgadmin4
    ports:
      - 5454:5454/tcp
    environment:
      - PGADMIN_DEFAULT_EMAIL=admin@mydomain.com
      - PGADMIN_DEFAULT_PASSWORD=postgres
      - PGADMIN_LISTEN_PORT=5454
What should I write in the Create new server > Connection tab > "Host name/address" field? If I type localhost or 127.0.0.1 I get an error (Unable to connect, see screenshot1 and screenshot2). Only if I type db (the service name specified in the yml file) does pgadmin accept it and create a db server with a postgres database called mydb.
Why? How do I find the IP that goes in the address field?
Furthermore, on Fedora28:
$ netstat -napt | grep LIST
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 192.168.122.1:53 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:3350 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:3389 0.0.0.0:* LISTEN -
tcp6 0 0 :::5454 :::* LISTEN -
tcp6 0 0 ::1:631 :::* LISTEN -
tcp6 0 0 :::5432 :::* LISTEN -
$

I encountered this problem just recently too. There are two approaches I found:
1) See here. Basically, you just search for the IP address of the postgres container and use that IP address in pgadmin4:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
194d4a5f9dd0 dpage/pgadmin4 "/entrypoint.sh" 48 minutes ago Up 48 minutes 443/tcp, 0.0.0.0:8080->80/tcp docker-postgis_pgadmin_1
334d5bdc87f7 kartoza/postgis:11.0-2.5 "/bin/sh -c /docke..." 48 minutes ago Up 48 minutes (healthy) 0.0.0.0:5432->5432/tcp docker-postgis_db_1
In my case, the postgres container ID is 334d5bdc87f7. Then look for the IP address:
$ docker inspect 334d5bdc87f7 | grep IPAddress
"SecondaryIPAddresses": null,
"IPAddress": "",
"IPAddress": "172.18.0.2",
When I used 172.18.0.2 in pgadmin4, I connected to the database! Yey!
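If you prefer not to grep the full inspect output, a format string can print just the address. A sketch, using the container name from the listing above:
$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' docker-postgis_db_1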
2) The second approach is easier. Instead of using localhost, 127.0.0.1, or ::1, I used my machine's IP address on the local network (in your case probably 192.168.122.1?). After that, I could connect to the postgres container!

From my reading of the docs and from testing this myself, you're doing it right: use the database service name from the docker-compose yml file as the "Host name/address" value in pgAdmin.
https://docs.docker.com/compose/networking/
In your "pgadmin" section I would use port values of 8080 or 80, why 5454?

Related

Connection to remote postgresql host running docker refused

I have an instance of postgresql running in a docker container.
I can connect to the database from the host that is running docker by:
docker exec -u root -it postgres bash
And then I access the database from there by doing su to the postgres user.
If I use a client on a desktop PC or laptop to try to connect, I get a connection refused:
psql -h 20.XXX.1XX.1XX -p 5432 -U <user>
psql: could not connect to server: Connection refused
Is the server running on host "20.XXX.1XX.1XX" and accepting
TCP/IP connections on port 5432?
I have edited the pg_hba.conf file in the docker instance and added the following:
host all all 0.0.0.0/0 md5
host all all ::/0 md5
If I run netstat, again within the container I get:
root@ee9dg39913cdc:/# netstat -na | grep 5432
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN
tcp6 0 0 :::5432 :::* LISTEN
unix 2 [ ACC ] STREAM LISTENING 52651 /var/run/postgresql/.s.PGSQL.5432
And when I run it on the machine hosting the docker instance:
root@VM01:~# netstat -na | grep 5432
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN
I do not have ufw running at all, so there are no firewall issues. The host is an Azure VM and port 5432 is open to the internet.
postgresql.conf is set as:
listen_addresses = '*'
Given all of the above, can anyone help me understand why I cannot connect to the postgres instance over the internet using:
psql -h 20.XXX.1XX.1XX -p 5432 -U <user>
Thanks.
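One thing worth checking, given that the host only shows a 127.0.0.1:5432 listener: a port published with no explicit interface binds to all interfaces, while a loopback-only binding usually comes from something like -p 127.0.0.1:5432:5432. A quick way to see how the container's port is actually published (container name taken from the docker exec command above; this is a sketch, not a confirmed diagnosis):
$ docker port postgres 5432
# 127.0.0.1:5432 here would mean the port is reachable only from the host itself;
# 0.0.0.0:5432 would mean it is published on all interfaces.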

unsupported port number: 0

If we specify only the container port in a docker-compose file, like below,
sshd:
  build: ./backend/mock/sshd
  volumes:
    - ./docker/sftp_upload_dir:/root/upload_dir
  ports:
    - '22'   # <----------
and bring the docker-compose file up with nerdctl using the command
nerdctl compose up
then the nerdctl command exits with the following error:
FATA[0000] unsupported port number: 0
As per the Docker documentation https://docs.docker.com/compose/compose-file/compose-file-v3/#ports
there are two relevant options (examples follow the list):
Specify both ports (HOST:CONTAINER)
Specify just the container port (an ephemeral host port is chosen for the host port).
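For illustration, the short-syntax forms from that page look roughly like this (the port numbers are just examples):
ports:
  - "3000"                  # container port only; an ephemeral host port is chosen
  - "8000:8000"             # HOST:CONTAINER
  - "127.0.0.1:8001:8001"   # bind to a specific host interface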
With only the container port given, nerdctl apparently picks 0 as the host port, which triggers the error, so the solution is to specify the host port explicitly, as below:
sshd:
  build: ./backend/mock/sshd
  volumes:
    - ./docker/sftp_upload_dir:/root/upload_dir
  ports:
    - '22:22'   # <<<<---------
Note that I have explicitly added 22: before 22 in the last line to make it work with nerdctl. It works by default with docker-compose up.

GitLab-CE installation using Docker-Compose file - Ssh git user asking password

I have successfully installed the GitLab-CE edition on my Docker host server using a docker-compose file, as per the link below.
https://docs.gitlab.com/omnibus/docker/#install-gitlab-using-docker-compose
My docker-compose.yml content is as follows.
web:
  image: 'gitlab/gitlab-ce:latest'
  restart: always
  hostname: 'gitlab.example.com'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'https://gitlab.example.com'
      gitlab_rails['gitlab_shell_ssh_port'] = 2223
  ports:
    - '80:80'
    - '443:443'
    - '2223:22'
  volumes:
    - '$GITLAB_HOME/gitlab/config:/etc/gitlab'
    - '$GITLAB_HOME/gitlab/logs:/var/log/gitlab'
    - '$GITLAB_HOME/gitlab/data:/var/opt/gitlab'
On one of my Ubuntu client systems, when I run ssh -T -p 2223 git@gitlab.example.com it works (it shows "Welcome to GitLab"). Whereas on my Docker host, if I remove the gitlab_rails['gitlab_shell_ssh_port'] = 2223 line from my gitlab.rb file and run gitlab-ctl reconfigure, and then run ssh -T git@gitlab.example.com, it asks for the git user's password.
The Docker server's port 22 listeners are shown below.
netstat -tulnp | grep :22
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1516/sshd
tcp6 0 0 :::2222 :::* LISTEN 2587/docker-proxy
tcp6 0 0 :::22 :::* LISTEN 1516/sshd
Since the sshd service is already running on port 22 on my Docker server, is there any way to use the default port 22 to clone our GitLab repository, without changing the sshd default port? I would like to clone with a command like the example below. Any suggestion would be helpful.
git clone git@gitlab.example.com:sample-group/sample.git
@Exadra37, I have updated the data you shared and tried to rebuild, but it fails.
[root@gitlab]# docker-compose up --build
WARNING: The SSH_AUTH_SOCK variable is not set. Defaulting to a blank string.
Recreating 486bb3cb8496_docker-gitlab-ce_web_1 ... error
ERROR: for 486bb3cb8496_docker-gitlab-ce_web_1 Cannot create container for service web: create .: volume name is too short, names should be at least two alphanumeric characters
ERROR: for web Cannot create container for service web: create .: volume name is too short, names should be at least two alphanumeric characters
ERROR: Encountered errors while bringing up the project.
You need to map your ssh socket into the container.
Add the following to your docker-compose.yml file:
environment:
  - SSH_AUTH_SOCK=/ssh-agent
volumes:
  - $SSH_AUTH_SOCK:/ssh-agent
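If $SSH_AUTH_SOCK is empty on the host (the warning in the build output above says it defaults to a blank string), the volume mapping likely collapses, which would explain the "volume name is too short" error. A sketch of making sure an agent is running and the variable is set before bringing the stack up (the key path is just an example):
$ eval "$(ssh-agent -s)"
$ ssh-add ~/.ssh/id_rsa        # load the key you use for GitLab
$ echo "$SSH_AUTH_SOCK"        # should print a socket path, not an empty line
$ docker-compose up --build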

Unable to consume or produce to a remote Kafka broker

I have setup a simple droplet on Digital Ocean and am running a single Kafka and Zookeeper node which is started using a docker-compose file.
I am running into an issue with consuming or producing to the Kafka broker from outside of the Digital Ocean droplet.
This is what my docker-compose file looks like:
version: '3.4'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    volumes:
      - /root/data/zookeeper/etc:/etc/zookeeper
      - /root/data/zookeeper/data:/var/lib/zookeeper/data
    container_name: "zookeeper"
    network_mode: "host"
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: PUBLIC_DIGITAL_OCEAN_IP:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://PUBLIC_DIGITAL_OCEAN_IP:9093
      KAFKA_LISTENER: PUBLIC_DIGITAL_OCEAN_IP:9093
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG4J_LOGGERS: "kafka.controller=WARN"
      KAFKA_LOG4J_ROOT_LOGLEVEL: WARN
      KAFKA_TOOLS_LOG4J_LOGLEVEL: ERROR
    volumes:
      - /root/data/kafka/etc:/etc/kafka
      - /root/data/kafka/data:/var/lib/kafka/data
    container_name: "kafka"
    network_mode: "host"
I have tried different combinations with setting KAFKA_ADVERTISED_LISTENERS to use localhost, 0.0.0.0 and I am not having any success.
I can consume and produce if I enter the kafka container and use the CLI.
From what I have read, Digital Ocean does not apply any firewall rules by default, so the ports should be exposed.
A snippet from running netstat within the droplet:
> netstat -tulpn | grep :2181
> tcp6 0 0 :::2181 :::* LISTEN 10522/java
> netstat -tulpn | grep :9093
> tcp6 0 0 :::9093 :::* LISTEN 13093/java
Any help is greatly appreciated!
The issue was the firewall rules on the droplet. Running the command
sudo ufw allow 2181 && sudo ufw allow 9092
resolved my issue.
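To double-check that the rules are in place afterwards, ufw can list them (a quick verification sketch):
$ sudo ufw status verbose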

Could not connect to Postgres from Symfony 4 + Docker

I've run into a strange problem. I created a docker-compose file to build php + nginx + postgres services:
version: '2'
services:
  db:
    image: orchardup/postgresql
    ports:
      - "5433:5432"
    environment:
      LC_ALL: C.UTF-8
      POSTGRESQL_USER: postgres
      POSTGRESQL_DB: db
      POSTGRESQL_PASS: postgres
  php:
    build: .docker/php-fpm
    ports:
      - "9002:9002"
    volumes:
      - .:/var/www/symfony:cached
      - ./var/log/symfony:/var/www/symfony/var/log:cached
    links:
      - db
  nginx:
    build: .docker/nginx
    ports:
      - "8001:80"
    links:
      - php
    volumes_from:
      - php
    volumes:
      - ./var/log/nginx/:/var/log/nginx:cached
After that I created the DB schema by running bin/console doctrine:schema:update --force. The tables and migrations were created just fine, so the DB connection seems OK. I checked this by connecting to the db from my machine through psql with the credentials from .env; the tables are there.
But when I go to the web page and try to log in, I get an error telling me the connection is not OK:
Connection refused Is the server running on host "127.0.0.1" and accepting TCP/IP connections on port 5433?"
I checked that in both cases I'm using the dev environment, from the web page and from the console. I tried ports 5433 and 5432 with no success. I've tried everything I could find for 3 hours.
This is the output from the postgres container:
# netstat -tlpn | grep 5432
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN 12/postgres
tcp6 0 0 :::5432 :::* LISTEN 12/postgres
# grep listen /etc/postgresql/9.3/main/postgresql.conf
listen_addresses = '*' # what IP address(es) to listen on;
The only way for containers to talk to each other is through IPs. By linking multiple containers together through --link (or links in docker-compose), docker creates a secure tunnel between those two containers so that we don't need to expose any ports externally.
If you try to connect to your database from your local environment through a database client, you will be able to reach it at 127.0.0.1:5433, as that port is exposed to your host through the docker-compose file. This is why your schema update command succeeded.
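As a concrete illustration (credentials taken from the compose file in the question), connecting from the host machine would look something like:
$ psql -h 127.0.0.1 -p 5433 -U postgres -d db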
Docker exposes connectivity information for the source container to
the recipient container in two ways:
Environment variables,
Updating the /etc/hosts file.
Ref: https://docs.docker.com/network/links/#communication-across-links
In order to connect to your database (which is running in the db container) from the php container, you will need to get the host of your db container through the environment variable DB_PORT_5432_TCP_ADDR (I might be wrong on this, but type env in your php container's terminal to verify. You will need to SSH into your php container).
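A quick way to check that variable without opening a full shell session (a sketch, assuming the Compose service is named php as above):
$ docker-compose exec php env | grep DB_PORT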
Alternatively, you can use the second method: just use db as the hostname instead of 127.0.0.1, since Docker updated the /etc/hosts file in the php container to map your linked container's name to its IP. In this case, the value mapped to the hostname db is the same as the value stored in the environment variable DB_PORT_5432_TCP_ADDR.
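Putting that together for Symfony, the DATABASE_URL in .env would point at the db host and the container port, something like this (a sketch; adjust the scheme and credentials to your own setup):
DATABASE_URL=postgresql://postgres:postgres@db:5432/db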