TeamCity not connecting to database while running in swarm mode - postgresql

I'm trying to run TeamCity in a Docker Swarm, but it cannot authenticate with the external AWS RDS Postgres instance. The strange part to me is that this issue does not occur when running with docker-compose. I've run this locally and on an AWS EC2 instance to double-check that it is not related to something going on with the machine; both times I get the same results.
The error message is:
Could not connect to PostgreSQL server.
the connection attempt failed.: org.postgresql.util.PSQLException: The connection attempt failed. Caused by: java.net.UnknownHostException: rds_url.com
docker-compose.yaml:
version: "3"
services:
teamcity-server:
privileged: true
image: jetbrains/teamcity-server:2020.2.2
hostname: teamcity-server
ports:
- "8111:8111"
- "5432:5432"
volumes:
- ./data_dir:/data/teamcity_server/datadir
- ./log_dir:/opt/teamcity/logs
teamcity-agent:
privileged: true
image: jetbrains/teamcity-agent:2020.2.2
environment:
- SERVER_URL=http://teamcity-server:8111
- AGENT_NAME=regular_agent
- DOCKER_IN_DOCKER=start
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
teamcity-minimal-agent:
privileged: true
image: jetbrains/teamcity-minimal-agent:2020.2.2
environment:
- SERVER_URL=http://teamcity-server:8111
- AGENT_NAME=minimal_agent
- DOCKER_IN_DOCKER=start
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"

Talking with TeamCity support, I was able to figure it out. In short, I needed to set the container's DNS server to the VPC DNS server. I also set the network mode to host:
dns: 169.254.169.253
network_mode: host
Locally I never solved it. To reach my RDS server I need to be on a VPN, and Docker Swarm was unable to resolve the hostname while the VPN was running.
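For reference, a minimal sketch of where those two keys sit in the service definition above (169.254.169.253 is the Amazon-provided link-local resolver available in every VPC); note that the Compose v3 reference lists network_mode as ignored by docker stack deploy, so behavior under swarm may depend on your engine version:

services:
  teamcity-server:
    image: jetbrains/teamcity-server:2020.2.2
    # Amazon-provided VPC resolver, so the container can resolve
    # the RDS endpoint's hostname.
    dns: 169.254.169.253
    # Honored by docker-compose; the Compose v3 reference documents
    # this key as ignored by docker stack deploy.
    network_mode: host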

Related

connect to docker postgresql from remote grafana

I have a Linux virtual machine hosted remotely on DigitalOcean, and this machine has Grafana installed. Locally I have Docker, and I launched a PostgreSQL server with the following docker-compose.yml:
version: '3.8'
services:
  timescale:
    image: timescale/timescaledb-ha:pg14-latest
    container_name: timescaledb
    ports:
      - "5432:5432"
    volumes:
      - timescale-volume:/home/postgres/pgdata/data
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
    networks:
      - trade-net
networks:
  trade-net:
    external: true
volumes:
  timescale-volume:
    external: true
Upon checking my network with docker network inspect trade-net I get:
"IPv4Address": "172.22.0.3/16"
I would now like to connect from Grafana to my PostgreSQL Docker container, which was launched from my local machine. In the Grafana data-source options I have tried to fill in:
Host: 172.22.0.3:5432
Database: postgres
User: postgres
Password: password
But the connection is never established.
One thing to note is that my postgresql.conf file has:
listen_addresses = '*'
The IP address from the Docker network is reachable only from your local PC.
To access your container remotely, you need your computer's public IP address.
Try running
curl ifconfig.me
It will return your public IP.
You should also check your router's firewall to make sure it allows (and forwards) connections to port 5432.
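As a quick way to verify reachability before touching Grafana, a sketch run from the remote DigitalOcean VM, assuming the postgresql-client tools are installed there and <public-ip> is the address returned by curl above:

# Check that Postgres answers on the public IP and forwarded port.
pg_isready -h <public-ip> -p 5432

# Or attempt a real login with the credentials from the compose file.
psql -h <public-ip> -p 5432 -U postgres -d postgres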

How to use fluent-bit with Docker-compose

I want to use the fluent-bit docker image to help me persist the ephemeral docker container logs to a location on my host (and later use it to ship logs elsewhere).
I am facing issues such as:
Cannot start service clamav: failed to initialize logging driver: dial tcp 127.0.0.1:24224: connect: connection refused
I have read a number of posts, including
configuring fluentbit with docker, but I'm still at a loss.
My docker-compose setup is made up of nginx, our app, Keycloak, Elasticsearch, and ClamAV. I added fluent-bit and made it start first via depends_on, and I changed the other services to use the fluentd logging driver.
Part of the config:
  clamav:
    container_name: clamav-app
    image: tiredofit/clamav:latest
    restart: always
    volumes:
      - ./clamav/data:/data
      - ./clamav/logs:/logs
    environment:
      - ZABBIX_HOSTNAME=clamav-app
      - DEFINITIONS_UPDATE_FREQUENCY=60
    networks:
      - iris-network
    expose:
      - "3310"
    depends_on:
      - fluentbit
    logging:
      driver: fluentd
  fluentbit:
    container_name: iris-fluent
    image: fluent/fluent-bit:latest
    restart: always
    networks:
      - iris-network
    volumes:
      - ./fluent-bit/etc:/fluent-bit/etc
    ports:
      - "24224:24224"
      - "24224:24224/udp"
I have tried proxy_passing 24224 to fluentbit in nginx and starting nginx first, which avoided the error for clamav and Elasticsearch, but I get the same error with Keycloak.
So how can I configure the service to use the host, or is it that localhost is not the "external" host?
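One detail that helps when reading this error: the fluentd logging driver dials out from the Docker daemon on the host, not from inside the compose network, which is why it targets 127.0.0.1:24224 (the published port), and depends_on only orders container start rather than waiting for fluent-bit to be listening. A hedged sketch of the documented driver options that deal with this:

    logging:
      driver: fluentd
      options:
        # The address the Docker daemon dials; defaults to localhost:24224.
        fluentd-address: 127.0.0.1:24224
        # Buffer log messages and connect in the background instead of
        # failing container start while fluent-bit is not yet listening.
        fluentd-async-connect: "true"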

How to connect Adminer when running in Docker container and DB is Postgres AWS RDS?

I'm trying to connect to Adminer on port 8080, which is running in a Docker container on an EC2 instance. The database is running in an AWS RDS instance, and both are in the same VPC. I can connect to the RDS instance via the CLI, but I can't use my browser to reach RDS through Adminer at domain.com:8080.
I don't know if the security groups are set up correctly, and I don't really know what needs to be added to them. Any advice? Thanks in advance.
  adminer:
    build:
      context: ../
      dockerfile: ./path/to/adminer/Dockerfile
    ports:
      - "8080:8080"
    environment:
      - POSTGRES_CONNECTION=psql
      - POSTGRES_HOST=**********
      - POSTGRES_DB=**********
      - POSTGRES_USER=**********
      - POSTGRES_PASSWORD=**********
      - POSTGRES_PORT=5432
    networks:
      - backend
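For what it's worth, the pattern this kind of setup usually needs is two inbound rules: port 8080 on the EC2 instance's security group from wherever the browser sits, and port 5432 on the RDS security group from the EC2 instance's security group. A hedged sketch with the AWS CLI, where sg-ec2, sg-rds, and the CIDR are hypothetical placeholders:

# Allow your browser's IP to reach Adminer on the EC2 instance (placeholder CIDR).
aws ec2 authorize-security-group-ingress \
    --group-id sg-ec2 --protocol tcp --port 8080 --cidr 203.0.113.7/32

# Allow the EC2 instance, identified by its security group, to reach RDS on 5432.
aws ec2 authorize-security-group-ingress \
    --group-id sg-rds --protocol tcp --port 5432 --source-group sg-ec2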

Docker-compose: App can not connect to Postgres container

I'm unable to get my Phoenix app to connect to the Postgres container when using docker-compose up.
My docker-compose.yml:
version: '3.5'
services:
  web:
    image: "solaris_cards:latest"
    ports:
      - "80:4000"
    env_file:
      - config/docker.env
    depends_on:
      - db
  db:
    image: postgres:10-alpine
    volumes:
      - "/var/lib/postgresql/data/pgdata/var/lib/postgresql/data"
    ports:
      - "5432:5432"
    env_file:
      - config/docker.env
The application running in the web container complains that the Postgres host does not exist:
[error] Postgrex.Protocol (#PID<0.2134.0>) failed to connect: ** (DBConnection.ConnectionError) tcp connect (db:5432): non-existing domain - :nxdomain
My env variables:
DATABASE_HOST=db
DATABASE_USER=postgres
DATABASE_PASS=postgres
I have tried running the Postgres container separately first and then running the web container, but I still have the same problem.
If I change the database host to 0.0.0.0 (which is what Postgres shows when running), then it seems to connect, but the connection is refused rather than not found.
However, Docker should be able to resolve the hostname without me manually inputting the IP.
Postgres was exiting because its volume already contained data.
This was solved by cleaning up with:
docker-compose down -v
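For context on why that works: the -v flag removes the named volumes declared in the compose file and the anonymous volumes attached to its containers, so the database starts from an empty data directory on the next up. A hedged sketch of the db service using an explicit named volume (pgdata is a name introduced here for illustration), which keeps that clean-up step predictable:

  db:
    image: postgres:10-alpine
    volumes:
      # Named volume: removed by `docker-compose down -v`, so the next
      # `up` starts Postgres with a fresh, empty data directory.
      - pgdata:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    env_file:
      - config/docker.env

volumes:
  pgdata: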

docker link resolves to localhost

I'm stuck on a very strange Docker problem that I've not encountered before. What I want to do is use docker-compose to make my application available from the internet. It's currently running on a DigitalOcean instance, and I'm working with the following docker-compose.yml:
version: '2.2'
services:
  mongodb:
    image: mongo:3.4
    volumes:
      - ./mongo:/data/db
    ports:
      - "27017"
  mongoadmin: # web UI for mongo
    image: mongo-express
    ports:
      - "8081:8081"
    links:
      - "mongodb:mongo"
    environment:
      - ME_CONFIG_OPTIONS_EDITORTHEME=ambiance
      - ME_CONFIG_BASICAUTH_USERNAME=user
      - ME_CONFIG_BASICAUTH_PASSWORD=pass
  app:
    image: project/name:0.0.1
    volumes:
      - ./project:/usr/src/app
    working_dir: /usr/src/app
    links:
      - "mongodb:mongodb"
    environment:
      - NODE_ENV=production
    command: ["npm", "start"]
    ports:
      - "3000:3000"
Mongoadmin connects properly and is able to reach the database, while the database itself cannot be connected to from outside the host.
The problem is that the app won't connect to the right address. It is an Express server using Mongoose to connect to the database. Before connecting, I log the URL it will connect to. In my config.js I've listed mongodb://mongodb/project, but this resolves to localhost, resulting in MongoError: failed to connect to server [localhost:27017] on first connect. The name of the container is resolved, but not to the proper address.
I've tried to connect to the IP (in the 172.18.0.0 range) that Docker assigned to the container, but that also resolved to localhost. I've looked into /etc/hosts, but it shows nothing related to this. Furthermore, I'm baffled because the mongo-express container is able to connect.
I've tried changing the name of the container, thinking it might be blocked for some reason due to previous runs or something like that, but this did not resolve the issue.
I've tried both explicit links and implicit resolution via Docker's internal DNS, but neither worked.
When binding port 27017 to localhost it is able to connect, but for security and easy configuration via environment variables, I would rather not have the mongodb instance bound to localhost.
I've also tried running this on my local machine, and there it works as expected: both mongoadmin and app are able to connect to the mongodb container. My local machine runs Docker version 1.12.6, build 78d1802, while the VPS runs Docker version 17.06.2-ce, build cec0b72, which is a newer version.
Could this be a newly introduced bug? Or am I missing something else? Any help would be appreciated.
Your docker-compose file seems not to have linked the app and mongodb containers.
You have this:
  app:
    image: project/name:0.0.1
    volumes:
      - ./project:/usr/src/app
    working_dir: /usr/src/app
    environment:
      - NODE_ENV=production
    command: ["npm", "start"]
    ports:
      - "3000:3000"
While I think it should be this:
  app:
    image: project/name:0.0.1
    volumes:
      - ./project:/usr/src/app
    working_dir: /usr/src/app
    links:
      - "mongodb:mongodb"
    environment:
      - NODE_ENV=production
    command: ["npm", "start"]
    ports:
      - "3000:3000"