Gogs + Drone getsockopt: connection refused - docker-compose

In the Gogs webhooks interface, when I click the Test Delivery button I get this error:
Delivery: Post http://localhost:8000/hook?access_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ0ZXh0IjoidG9tL2Ryb255IiwidHlwZSI6Imhvb2sifQ.UZdOVW2IdiDcLQzKcnlmlAxkuA8GTZBH634G0K7rggI: dial tcp [::1]:8000: getsockopt: connection refused
This is my docker-compose.yml file:
version: '2'
services:
  gogs:
    image: gogs/gogs
    ports:
      - 3000:3000
      - 22:22
    links:
      - mysql
  mysql:
    image: mysql
    expose:
      - 3306
    environment:
      - MYSQL_ROOT_PASSWORD=1234
      - MYSQL_DATABASE=gogs
  drone:
    image: drone/drone
    links:
      - gogs
    ports:
      - 8000:8000
    volumes:
      - ./drone:/var/lib/drone
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - REMOTE_DRIVER=gogs
      - REMOTE_CONFIG=http://gogs:3000?open=true
      # - PUBLIC_MODE=true

The root cause is that Drone assumes its external address is localhost:8000, because that is how it is being accessed from your browser. Drone therefore configures all Gogs webhooks to use localhost:8000/hook as the callback URL.
The problem is that Gogs and Drone are running in separate containers on separate networks. When Gogs tries to send the hook to Drone, it sends it to localhost:8000 and fails, because Drone is on a separate bridge.
Recommended Fix
The recommended fix is to use DNS or an IP address with Drone and Gogs. If you are running a production installation of either system, it is unlikely you will be using localhost, so this seems like a reasonable solution.
For local testing you can also use the local IP address assigned by Docker. You can find your Drone and Gogs IP addresses using docker inspect:
"Networks": {
"default": {
"IPAddress": "172.18.0.3"
}
}
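For reference, a command along these lines prints just the container's address; the container name drone_drone_1 is an assumption based on Compose's default <project>_<service>_1 naming, so adjust it to whatever docker ps shows:
docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' drone_drone_1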
Why not custom hostnames?
Using custom hostnames, such as http://gogs, is problematic because Drone creates ephemeral Docker containers for every build using the default Docker network settings. This means your build environment uses its own isolated network and will not be able to resolve http://gogs.
So even if we configured Drone and Gogs to communicate using custom hostnames, the build environment would be unable to resolve the Gogs hostname to clone your repository.
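You can see this limitation for yourself: a container started on Docker's default bridge (which is what the build containers use here) does not get name resolution for other containers, so looking up the gogs service name fails. A quick sketch, assuming the busybox image is available or can be pulled:
docker run --rm busybox nslookup gogs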

Related

Docker container communication with other container on different host/server

I have two servers (CentOS 8).
On server1 I have a mysql-server container, and on server2 I have the Zabbix front end, i.e. zabbix-web-apache-mysql (container name zabbixfrontend).
I am trying to connect to mysql-server from the zabbixfrontend container and get this error:
bash-4.4$ mysql -h <MYSQL_SERVER_IP> -P 3306 -uroot -p
Enter password:
ERROR 2002 (HY000): Can't connect to MySQL server on '<MYSQL_SERVER_IP>' (115)
When I run nc from the zabbixfrontend container to my mysql-server IP, I get a "No route to host" error message:
bash-4.4$ nc -zv <MYSQL_SERVER_IP> 3306
Ncat: Version 7.70 ( https://nmap.org/ncat )
Ncat: No route to host.
NOTE: I can successfully nc to the mysql-server container from the host machine (server2).
docker-compose.yml
version: '3.5'
services:
  zabbix-web-apache-mysql:
    image: zabbix/zabbix-web-apache-mysql:centos-8.0-latest
    container_name: zabbixfrontend
    #network_mode: host
    ports:
      - "80:8080"
      - "443:8443"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - ./zbx_env/etc/ssl/apache2:/etc/ssl/apache2:ro
      - ./usr/share/zabbix/:/usr/share/zabbix/
    env_file:
      - .env_db_mysql
      - .env_web
    secrets:
      - MYSQL_USER
      - MYSQL_PASSWORD
      - MYSQL_ROOT_PASSWORD
    # zbx_net_frontend:
    sysctls:
      - net.core.somaxconn=65535

secrets:
  MYSQL_USER:
    file: ./.MYSQL_USER
  MYSQL_PASSWORD:
    file: ./.MYSQL_PASSWORD
  MYSQL_ROOT_PASSWORD:
    file: ./.MYSQL_ROOT_PASSWORD
The docker logs zabbixfrontend output is below:
** Deploying Zabbix web-interface (Apache) with MySQL database
** Using MYSQL_USER variable from ENV
** Using MYSQL_PASSWORD variable from ENV
********************
* DB_SERVER_HOST: <MYSQL_SERVER_IP>
* DB_SERVER_PORT: 3306
* DB_SERVER_DBNAME: zabbix
********************
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
The nc message is telling the truth: No route to host.
This happens because when you deploy your front-end container on the Docker bridge network, its IP address belongs to the 172.18.0.0/16 subnet, and you are trying to reach a database whose IP address belongs to a different subnet (10.0.0.0/16).
On the other hand, when you deploy your front-end container on the host network, you no longer face that problem, because the container now literally uses the IP address of the host machine, 10.0.0.2, and no route needs to be explicitly created to reach 10.0.0.3.
The problem you then face is that you can no longer access the web UI via the browser. This happens because, I assume, you kept the ports: option in your docker-compose.yml and tried to access the service on localhost:80/443. Source and destination ports do not need to be specified if you run the container on the host network: the container just listens directly on the host, on the port that is opened inside the container.
Try to run the front-end container with this config and then access it on localhost:8080 and localhost:8443:
...
    network_mode: host
#    ports:
#      - "80:8080"
#      - "443:8443"
    volumes:
...
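Once it is up, you can check it from server2 itself with something like the following; -k just skips certificate validation, assuming the default certificate is self-signed:
curl -I http://localhost:8080
curl -kI https://localhost:8443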
Running containers on the host network is not something I would usually recommend, but since your setup is quite special, with one container running on one Docker host and another container running on a separate, independent Docker host, I assume you don't want to create an overlay network and register the two Docker hosts to a swarm.

Setting up SMTP server for your app in a container

I have an app which sends email and requires SMTP running on port 25, so I created another container and mapped port 25 from the host to the container.
That didn't quite work, as it kept throwing the following error:
ERROR: for smtp Cannot start service smtp: driver failed programming external connectivity on endpoint push_smtp_1 (25f260f6185dd34cfdb8fb9956c28187028aaca4d850d7a73acc4c2180c55696): Error starting userland proxy: Bind for 0.0.0.0:25: unexpected error (Failure EADDRINUSE)
I'm not sure what could be wrong here. Following other posts, I tried restarting the Docker client and verified that nothing else is running on port 25 (lsof -i:25). Let me know if I am missing something.
The second part of this question is: what is the ideal way to deal with an SMTP server?
Should the SMTP server be created within the app container? I came across this blog: http://www.tothenew.com/blog/setting-up-sendmail-inside-your-docker-container/
If not (1), is it better to create an SMTP container and map ports? If so, what is the reason for the above error?
Below is my docker-compose.yml:
version: '3'
services:
  push:
    image: emailService
    ports:
      - "9602:9602/tcp"
    networks:
      - default
    build:
      context: ./
      dockerfile: Dockerfile
      args:
        - "TARGET=build"
    depends_on:
      - gearmand
      - smtp
  smtp:
    image: catatnight/postfix:latest
    ports:
      - "25:25"
    networks:
      - default
  gearmand:
    image: <path>/<to>/gearmand:latest
    ports:
      - "4730:4730/tcp"
    networks:
      - default
Thanks!
If you want the SMTP server to just be reachable from the other container and not from the outside, no need to map the port.
Using docker-compose, all defined containers will automatically be added to a network in which containers can reach each other by their name (see https://docs.docker.com/compose/networking/). If your custom "default" network is a bridge network, this will work as well.
That means your SMTP container will be directly reachable at smtp:25 from other containers (i.e. via its internal port and internal hostname instead of the host port and the publicly routable IP address of your Docker host).
Nobody else will be able to use your SMTP server like that. I think this might lead to problems with recipients not accepting the emails sent by it (see https://serverfault.com/q/364473). @David Maze has a point in saying that it's probably better to use a public/official mail provider anyway.
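For illustration, a minimal sketch of that internal-only wiring; the SMTP_HOST/SMTP_PORT environment variables are an assumption about how the app is configured, not something taken from the original compose file:
services:
  push:
    image: emailService
    environment:
      - SMTP_HOST=smtp   # resolved via the Compose network
      - SMTP_PORT=25
    depends_on:
      - smtp
  smtp:
    image: catatnight/postfix:latest
    # no ports: mapping needed; other services reach it at smtp:25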
I think the issue is that you have something else on the host that is already listening on that port.
Try to find out what ports on the host are listening with https://www.cyberciti.biz/faq/how-do-i-find-out-what-ports-are-listeningopen-on-my-linuxfreebsd-server/
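For example, either of the following (run on the host, not inside a container) will show what is bound to port 25; the exact output format differs between the tools:
sudo ss -ltnp | grep ':25'
sudo lsof -i :25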
Use some other port on the host:
version: '3'
services:
  push:
    image: emailService
    ports:
      - "9602:9602/tcp"
    networks:
      - default
    build:
      context: ./
      dockerfile: Dockerfile
      args:
        - "TARGET=build"
    depends_on:
      - gearmand
      - smtp
  smtp:
    image: catatnight/postfix:latest
    ports:
      - "2525:25"
    networks:
      - default
  gearmand:
    image: <path>/<to>/gearmand:latest
    ports:
      - "4730:4730/tcp"
    networks:
      - default

Map port of container without exposing to host in docker-compose

I have
services:
  api:
    build: .
    ports:
      - "8080:8080"
  superservice:
    image: superservice
    ports:
      - # ?
superservice is very super, but I simply pulled it from Docker Hub and its port cannot be configured when creating a container. The default port is 8080, but that is already in use. How do I change it to 8081? I do NOT want it to be accessible from the host, which is why - "8081:8080" is not desirable.
In this case you would have to change the port superservice runs on, either by changing its configuration or, if possible, by changing the command or entrypoint it runs on start and passing the new port as an argument.
Although, if superservice does not have to be reachable from the host, you should have no problem referencing it as http://superservice:8080 from inside the api container.
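A minimal sketch of that approach: superservice gets no ports: entry, so it is only reachable on the Compose network, and api talks to it at http://superservice:8080. The API_BACKEND_URL variable is just an assumed way for api to pick up that address:
services:
  api:
    build: .
    ports:
      - "8080:8080"            # only api is published to the host
    environment:
      - API_BACKEND_URL=http://superservice:8080
  superservice:
    image: superservice        # no ports: entry, not reachable from the host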

How to connect Postgresql Docker Container with another Docker Container

I want to connect a mysoft Docker container to a PostgreSQL Docker container.
But I get these errors:
ERROR: for mysoft_db_1 Cannot start service db: driver failed programming external connectivity on endpoint mysoft_db_1 (XXX):
Error starting userland proxy: listen tcp 0.0.0.0:5432: bind: address already in use
ERROR: for db Cannot start service db: driver failed programming external connectivity on endpoint mysoft_db_1 (XXX):
Error starting userland proxy: listen tcp 0.0.0.0:5432: bind: address already in use
Here is my docker-compose.yml:
version: '2'
services:
  mysoft:
    image: mysoft/mysoft:1.2.3
    ports:
      - "80:8080"
    environment:
      - DATABASE_URL=postgres://mysoft:PASSWORD@db/mysoft?sslmode=disable
  db:
    image: postgresql
    environment:
      - POSTGRES_USER=mysoft
      - POSTGRES_PASSWORD=PASSWORD
      - POSTGRES_DB=mysoft
    ports:
      - 5432:5432
I want to use another, already running Docker PostgreSQL server to connect the new software, i.e. one PostgreSQL Docker server shared by several projects.
Is that possible?
You should add links to the definition of the mysoft service in docker-compose.yml. Then your db service will be accessible from the mysoft container.
After that, your service definition will look like this:
mysoft:
  image: mysoft/mysoft:1.2.3
  ports:
    - "80:8080"
  environment:
    - DATABASE_URL=postgres://mysoft:PASSWORD@db/mysoft?sslmode=disable
  links:
    - db
Now, about the binding error. You probably receive it because you have a local PostgreSQL instance running on port 5432, or because you already have a running Docker container with port 5432 mapped to the local machine.
ports:
  - 5432:5432
This mapping publishes the port to your local machine. If you don't need to access the container's database from the host, just remove it.
I want to use another, already running Docker PostgreSQL server to connect the new software, i.e. one PostgreSQL Docker server for several projects. Is that possible?
Yes, it's possible. Use external_links.
If you choose this option:
Remove the db service and the links entry in the mysoft service definition from your docker-compose.yml.
Add external_links with the correct container name to the mysoft service definition (see the sketch below).
Update the host and port in DATABASE_URL according to that container name and the PostgreSQL port inside it.
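A minimal sketch of that layout; the container name shared_postgres is an assumption, so use whatever name docker ps shows for your already running PostgreSQL container (the :db alias keeps the db hostname in DATABASE_URL working):
mysoft:
  image: mysoft/mysoft:1.2.3
  ports:
    - "80:8080"
  external_links:
    - shared_postgres:db
  environment:
    - DATABASE_URL=postgres://mysoft:PASSWORD@db/mysoft?sslmode=disable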
You might want to check whether you already have a local PostgreSQL instance running on port 5432. If you do, you cannot map 5432:5432 and have to publish the container's port on another host port, e.g. 5555:5432,
at least if you are using native Docker (running on localhost)...

Docker compose yml static IP addressing

I have this docker-compose.yml (not the full file here):
version: '2'
services:
  nginx:
    build: ./nginx/
    ports:
      - 8080:80
    links:
      - php
    volumes_from:
      - app
    networks:
      app_subnet:
        ipv4_address: 172.16.1.3
  php:
    build: ./php/
    expose:
      - 9000
    volumes_from:
      - app
    networks:
      app_subnet:
        ipv4_address: 172.16.1.4

networks:
  app_subnet:
    driver: bridge
    ipam:
      config:
        - subnet: 172.16.1.0/24
          gateway: 172.16.1.1
After docker-compose up I get this error:
User specified IP address is supported only when connecting to
networks with user configured subnets
So I create the subnet with docker network create --gateway 172.16.1.1 --subnet 172.16.1.0/24 app_subnet.
But this doesn't solve the problem, because docker-compose creates a network named dev_app_subnet on the fly, so my subnet is not used and I get the same error.
The main purpose of all this is to assign a static IP to the nginx service so I can open my project's web URL via an /etc/hosts entry.
[SOLVED] Found the solution. When pointing to the network, we should use the external flag, telling Compose that the network is already created and should be taken from outside (otherwise it will be created on the fly with the project prefix):
networks:
  app_subnet:
    external: true
After that, docker-compose will attach the containers to the existing app_subnet.
Before that, the subnet must be created:
docker network create --gateway 172.16.1.1 --subnet 172.16.1.0/24 app_subnet
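Putting the pieces together, the relevant parts of the compose file then look roughly like this (a sketch reusing the addresses from the question):
networks:
  app_subnet:
    external: true

services:
  nginx:
    build: ./nginx/
    networks:
      app_subnet:
        ipv4_address: 172.16.1.3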
In my case, I first ran docker-compose up and it failed, but the network had already been created (you can see it with docker network ls).
In that case, just run docker-compose down, fix the yml, and rerun docker-compose up.
It is probable that a previous run of the script already created the network, but without the subnet parameter.
To fix it, list the networks with
docker network ls
and remove the one that is blocking the creation of the service:
docker network rm <network id>
Adding to Qiushi's answer, you can use Compose file version 3.9 to specify the external network as below.
version: "3.9"
networks:
network1:
external: true
name: etl_subnet
Ref: https://docs.docker.com/compose/compose-file/compose-file-v3/#network-configuration-reference
To be specific, if you're using docker stack deploy to deploy to a swarm cluster, you need to specify the scope of the network as swarm:
docker network create --gateway 172.16.1.1 --subnet 172.16.1.0/24 --scope swarm app_subnet
In the docker-compose.yml, you then specify the external network as:
networks:
  default:
    external:
      name: etl_subnet
Use it as default.