https://docs.gitlab.com/ee/install/docker.html
I am reading the documentation and I am curious:
Should it be similar to :22? Like this: :80?
The following lines configure GitLab for HTTP port 8929 and SSH port 2224 within the container.
environment:
  GITLAB_OMNIBUS_CONFIG: |
    external_url "http://gitlab.example.com:8929"
    gitlab_rails["gitlab_shell_ssh_port"] = 2224
The ports: section of the docker-compose.yml file maps a host port to a container port in the form: - 'HOST_PORT:CONTAINER_PORT'.
In this particular example, the HTTP port is being mapped to host port 8929, and the GitLab Shell SSH port is mapped to host port 2224.
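For completeness, the matching ports: section from that same GitLab example maps those container ports out to the host. SSH still listens on 22 inside the container, while HTTP moves to 8929 inside the container because of external_url:
ports:
  - '8929:8929'
  - '2224:22'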
See the Docker Docs: Compose file reference, ports.
Related
I need to connect to a Postgres server instance that is running as a SystemD service from within a docker-compose file.
docker-compose containers ---> postgres as systemd
This is about setting up Airflow with an external Postgres DB that is on localhost.
I've taken the docker-compose example with:
curl -LfO 'https://airflow.apache.org/docs/apache-airflow/2.2.3/docker-compose.yaml'
However, in there they define a Postgres container that Airflow connects to by resolving the postgres hostname within the Docker network.
But I already have Postgres running on the machine via SystemD; I can check its status with:
# make sure the service is up and running
systemctl list-units --type=service | grep 'postgres.*12'
# check the process
ps aux | grep 'postgres.*12.*config_file'
# check the service details
systemctl status postgresql@12-main.service
AFAIU, inside the docker-compose YAML file I need to use host.docker.internal so that Docker lets the containers find their way out of the Docker network and reach localhost on the host, where the SystemD services (e.g. Postgres) are listening.
I've setup the Airflow YAML file for docker-compose with:
---
version: '3'
x-airflow-common:
  &airflow-common
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.2.3}
  environment:
    &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: LocalExecutor
    AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@host.docker.internal/airflow
  ...
  extra_hosts:
    - "host.docker.internal:host-gateway"
There's a lot of stuff going on there, but the point is that the SQLAlchemy connection string uses host.docker.internal as the host.
Now when I invoke docker-compose -f airflow-local.yaml up airflow-init, I see in the output logs that Airflow complains it cannot find the Postgres server:
airflow-init_1 | psycopg2.OperationalError: connection to server at "host.docker.internal" (172.17.0.1), port 5432 failed: Connection refused
airflow-init_1 | Is the server running on that host and accepting TCP/IP connections?
It might be an issue with the DNS resolution between the Docker network and the OS network; I am not sure how to troubleshoot this.
How do I make a Docker container reach SystemD services that serve on localhost?
Turns out I just need to use network_mode: host in the YAML code for the Docker container definition (a container is a service in docker-compose terminology).
This way the Docker virtual network is somehow bound to the laptop networking layer ("localhost" or "127.0.0.1"). This setup is not encouraged by the Docker people, but sometimes things are messy when dealing with legacy systems so you have to work around what has been done in the past.
Then you can use localhost to reach the Postgres DB running as a SystemD service.
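A minimal sketch of where that setting goes in the service definition (the service and image names here just follow the Airflow file above, so treat them as placeholders):
services:
  airflow-webserver:
    image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.2.3}
    network_mode: host  # bind the service to the host's network stack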
The only caveat is that you cannot use port mappings together with network_mode: host, otherwise docker-compose complains with the error message:
"host" network_mode is incompatible with port_bindings
So you have to remove the YAML part similar to:
ports:
  - 9999:8080
and sort out the ports (TCP sockets) in a different way.
In my specific scenario (Airflow stuff), I've done the following:
For the host/networking that makes the Airflow webserver (docker container/service) reach the Postgres DB (SystemD service/daemon) on localhost:
# see the use of "localhost"
AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@localhost/airflow
For the TCP port, in the docker-compose YAML service definition for the Airflow webserver I specified the port:
command: webserver -p 9999
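Putting both changes together, the webserver service might look roughly like this (a sketch; the <<: *airflow-common merge key comes from the upstream Airflow compose file):
services:
  airflow-webserver:
    <<: *airflow-common
    network_mode: host          # "localhost" now refers to the host machine
    command: webserver -p 9999  # choose the TCP port here instead of a ports: mapping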
I have a database on CrateDB. The database program is started through a docker-compose.yml file. It is running on http://192.168.99.100:4200 (this is the Docker machine's IP with Crate's HTTP port).
I want to connect the Crate DB with Power BI. When I try to configure the PostgreSQL ODBC Driver, I don't know what to type on "Server" field.
So far I've tried "localhost", "127.0.0.1","0.0.0.0", "192.168.99.100" but none of these works.
So my question is, which IP address should I type on the "server" field?
The setup seems correct. Make sure port 5432 is correctly published. Assuming you're using the official cratedb image, the ports exposed in the Dockerfile used to assemble the image are the following:
# http: 4200 tcp
# transport: 4300 tcp
# postgres protocol ports: 5432 tcp
EXPOSE 4200 4300 5432
Therefore, in order to access those services remotely you have to publish their corresponding ports. In docker-compose.yml configure the port mappings if you haven't already done so:
version: "3.5"
services:
  cratedb:
    image: crate
    ports:
      - 5432:5432
      - 4200:4200
      - 4300:4300
More about port mappings in the ports section of the Compose file reference. Now you should be able to connect using the ODBC PostgreSql driver to the IP address of the host (i.e. 192.168.99.100) and port 5432.
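Before touching the ODBC driver, it can help to verify the PostgreSQL wire-protocol endpoint from the host, for example with psql if you have it installed (crate is CrateDB's default superuser):
psql -h 192.168.99.100 -p 5432 -U crate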
Alternatively, you could run the container with port bindings:
docker run -d -p 4200:4200 -p 5432:5432 -p 4300:4300 crate
If you still cannot connect to the database, check the firewall settings.
I'm running a docker-compose container for postgresql. The postgresql database in the container is running on the standard port 5432, and I am publishing that port out to port 5444 on the host (since the host's postgresql default port is in use).
I am using the same configuration on the host and in the container (a .env file that provides config settings for cli commands and the app as a whole). Unfortunately, whichever port I choose, one system will lose access. For example, I cannot run:
[k@host]$ psql -p 5444 # Connects
on the host and still have it work inside the container:
[k@db-container]$ psql -p 5444 # Errors in container
The container's postgresql-server is running on 5432:
[k@db-container]$ psql -p 5432 # Connects successfully in container
and the ports are published via the docker-compose.yml:
ports:
  - "5444:5432"
So currently I don't know how to configure the same port everywhere simply via the docker-compose.yml! The expose directive exposes the port but does not allow remapping; ports forwards host<-->container but does not remap the internal port. I have thought of remapping the postgresql default port inside the postgresql container configuration, but fully reconfiguring postgresql seems non-trivial to do via docker-compose on every docker-compose up.
How can I remap the ports inside the container so that I can use port 5444 everywhere, in the host & container?
The standard PostgreSQL client library supports several environment variables that tell it where the server is. In the same way that you can configure the host using $PGHOST, you can configure the port using $PGPORT.
In a Docker Compose context, it should be straightforward to set these:
version: '3'
services:
  postgres:
    image: postgres:11
    ports: ['5444:5432']
    volumes: ['./postgres:/var/lib/postgresql/data']
  myapp:
    build: .
    ports: ['8888:8888']
    environment:
      PGHOST: postgres
      # default PGPORT=5432 will work fine
Similarly, if you're running the application in a development environment on the host, you can set
PGHOST=localhost PGPORT=5444 ./myapp
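The same variables are honored by the stock PostgreSQL command-line tools, so a quick connectivity check from the host could look like this (assuming the compose file above; postgres is the image's default superuser):
PGHOST=localhost PGPORT=5444 PGUSER=postgres psql -c 'SELECT 1;'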
You can't use the same port twice on the host.
Nothing prevents you from running multiple instances if they have different IP addresses.
psql -p 5444
defaults to psql --host=127.0.0.1 -p 5444. If you want several instances, obviously you must make them differ in some way.
I ran into a strange problem. I've created a docker-compose file to build php + nginx + postgres services:
version: '2'
services:
  db:
    image: orchardup/postgresql
    ports:
      - "5433:5432"
    environment:
      LC_ALL: C.UTF-8
      POSTGRESQL_USER: postgres
      POSTGRESQL_DB: db
      POSTGRESQL_PASS: postgres
  php:
    build: .docker/php-fpm
    ports:
      - "9002:9002"
    volumes:
      - .:/var/www/symfony:cached
      - ./var/log/symfony:/var/www/symfony/var/log:cached
    links:
      - db
  nginx:
    build: .docker/nginx
    ports:
      - "8001:80"
    links:
      - php
    volumes_from:
      - php
    volumes:
      - ./var/log/nginx/:/var/log/nginx:cached
After that I created the DB schema by running bin/console doctrine:schema:update --force. The tables and migrations were created just fine, so the DB connection seems OK. I double-checked by connecting to the DB from my machine through psql with the credentials from .env; the tables are there.
But when I go to the web page and try to authorize, I get an error telling me the connection is not OK:
Connection refused Is the server running on host "127.0.0.1" and accepting TCP/IP connections on port 5433?"
I checked that in both cases I'm using the dev environment - from the web page and from the console. I tried both ports 5433 and 5432 with no success. I've tried everything I could find for three hours.
This is the output from the postgres container:
# netstat -tlpn | grep 5432
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN 12/postgres
tcp6 0 0 :::5432 :::* LISTEN 12/postgres
# grep listen /etc/postgresql/9.3/main/postgresql.conf
listen_addresses = '*' # what IP address(es) to listen on;
The only way for containers to talk to each other is through IPs. By linking multiple containers together through --link (or links in docker-compose), Docker creates a secure tunnel between those two containers so that we don't need to expose any ports externally.
If you try to connect to your database from your local environment through a database client, you will be able to connect to it at 127.0.0.1:5433, as the port is exposed to your host through the docker-compose file. This is why your schema update command succeeded.
Docker exposes connectivity information for the source container to
the recipient container in two ways:
Environment variables,
Updating the /etc/hosts file.
Ref: https://docs.docker.com/network/links/#communication-across-links
In order to connect to your database (which is running in the db container) from the php container, you will need to get the host of your db container through the environment variable DB_PORT_5432_TCP_ADDR (I might be wrong on this - type env in your php container's terminal to verify; you will need a shell inside the php container, e.g. via docker exec).
Alternatively, you can use the second method: use db as the hostname instead of 127.0.0.1, since Docker updated the /etc/hosts file in the php container to map the linked container's name to its IP. In this case, the value mapped to the hostname db is the same as the value stored in the environment variable DB_PORT_5432_TCP_ADDR.
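For example, with the compose file above, a Symfony-style .env connection string would point at the service name and the container port rather than 127.0.0.1:5433 (a sketch; the variable name and credentials are taken from the question's setup):
# use the linked service name "db" and the container port 5432,
# not the published host port 5433
DATABASE_URL=postgresql://postgres:postgres@db:5432/db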
How do I expose multiple ports in docker-compose.yml for one container? For example, I need to expose a port for the postgresql container, and if 5432 is occupied (by the local postgresql), then set it to the next one in the range 5432-5442. Is that possible?
In your compose file you can publish ports using a range:
ports:
  - "5432-5442:5432"
Or, according to the Docker Compose docs:
ports:
  - "5432"
This will pick a random available port on the host machine and map it to container port 5432.
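Either way, you can check which host port was actually assigned with docker-compose port (the service name db here is an assumption):
docker-compose port db 5432
# prints the host address and port mapped to container port 5432, e.g. 0.0.0.0:5433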