Setting up an SMTP server for your app in a container - email

I have an app which sends email and requires SMTP running on port 25, so I created another container and mapped port 25 from the host to the container.
That didn't quite work: it kept throwing the following error:
ERROR: for smtp Cannot start service smtp: driver failed programming external connectivity on endpoint push_smtp_1 (25f260f6185dd34cfdb8fb9956c28187028aaca4d850d7a73acc4c2180c55696): Error starting userland proxy: Bind for 0.0.0.0:25: unexpected error (Failure EADDRINUSE)
I'm not sure what could be wrong here. Following other posts, I tried restarting Docker and verified that nothing else is running on port 25 (lsof -i:25). Let me know if I am missing something.
The second part of this question is: what is the ideal way to deal with an SMTP server?
1. Should the SMTP server be created within the app container? I came across this blog: http://www.tothenew.com/blog/setting-up-sendmail-inside-your-docker-container/
2. If not (1), is it better to create an SMTP container and map ports? If so, what is the reason for the above error?
Below is my docker-compose file:
version: '3'
services:
  push:
    image: emailService
    ports:
      - "9602:9602/tcp"
    networks:
      - default
    build:
      context: ./
      dockerfile: Dockerfile
      args:
        - "TARGET=build"
    depends_on:
      - gearmand
      - smtp
  smtp:
    image: catatnight/postfix:latest
    ports:
      - "25:25"
    networks:
      - default
  gearmand:
    image: <path>/<to>/gearmand:latest
    ports:
      - "4730:4730/tcp"
    networks:
      - default
Thanks!

If you want the SMTP server to be reachable only from the other containers and not from the outside, there is no need to map the port at all.
Using docker-compose, all defined containers will automatically be added to a network in which containers can reach each other by their name (see https://docs.docker.com/compose/networking/). If your custom "default" network is a bridge network, this will work as well.
That means your SMTP container will be directly reachable at smtp:25 from other containers (i.e. via its internal port and internal hostname, instead of the host port and the publicly routable IP address of your Docker host).
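For example, a sketch of the smtp service with the port mapping dropped (the rest of your compose file stays as it is; the app just uses hostname smtp and port 25):
services:
  smtp:
    image: catatnight/postfix:latest
    networks:
      - default
    # no "ports:" entry needed - other services on the same
    # compose network still reach this container at smtp:25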
Nobody else will be able to use your SMTP server like that. A self-hosted SMTP server might also run into problems with recipients not accepting the emails it sends (see https://serverfault.com/q/364473). @David Maze has a point in saying that it's probably better to use a public/official mail provider anyway.

I think the issue is that something else on the host is already listening on that port.
Try to find out what ports on the host are listening: https://www.cyberciti.biz/faq/how-do-i-find-out-what-ports-are-listeningopen-on-my-linuxfreebsd-server/
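For example, on the Docker host (standard Linux tools; use whichever is installed):
sudo ss -tlnp | grep ':25'        # show what is listening on port 25
sudo netstat -tulpn | grep ':25'  # older alternative to ss
Note that lsof may not show processes owned by other users unless run with sudo, so a host-level MTA such as postfix or exim can easily be missed.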

Use some other port on the host (here, host port 2525 maps to container port 25):
version: '3'
services:
  push:
    image: emailService
    ports:
      - "9602:9602/tcp"
    networks:
      - default
    build:
      context: ./
      dockerfile: Dockerfile
      args:
        - "TARGET=build"
    depends_on:
      - gearmand
      - smtp
  smtp:
    image: catatnight/postfix:latest
    ports:
      - "2525:25"
    networks:
      - default
  gearmand:
    image: <path>/<to>/gearmand:latest
    ports:
      - "4730:4730/tcp"
    networks:
      - default
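With that mapping in place, you can sanity-check the new host port after docker-compose up (assuming netcat is installed):
nc -vz localhost 2525   # should report the connection as succeeded
Other containers are unaffected by the change; they still reach the server at smtp:25.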

Related

Access container from docker-compose using linuxserver/duckdns IP

I was looking for software like No-IP to dynamically update my IP using a free domain from them, like <domain>.zapto.org, but this time for a setup with docker containers. So I found DuckDNS and tried setting it up.
Perhaps I got it wrong, but as I understood it, I can create a service within my docker-compose services that runs linuxserver/duckdns. When I do that, I suppose I can then access my other services from that same compose file using the domain created on DuckDNS, is that right?
For instance, I got this docker-compose:
version: "3.9"
services:
dns_server:
image: linuxserver/duckdns:version-13f609b7
restart: always
environment:
TOKEN: ${DUCKDNS_TOKEN}
TZ: ${TZ}
SUBDOMAINS: ${DUCKDNS_SUBDOMAINS}
depends_on:
- server
- db
- phpmyadmin
server:
# ...
restart: always
ports:
- "7171:7171"
- "7172:7172"
# ...
command: sh -c "/wait && screen -S tfs ./tfs"
# Database
db:
image: bitnami/mariadb:10.8.7-debian-11-r1
restart: always
ports:
- "3306:3306"
# ...
# phpmyadmin
phpmyadmin:
# ...
image: bitnami/phpmyadmin:5.2.1-debian-11-r1
restart: always
ports:
- "8080:8080"
- "8443:8443"
# ...
That compose gives me all of the containers up and running.
When I try to reach my server service using 127.0.0.1:7171 or localhost:7171, and my phpmyadmin using 127.0.0.1:8080, it works, but it doesn't when I try <mydomain>.duckdns.org:7171 or <mydomain>.duckdns.org:8080.
What is wrong?
As far as I know, when you define the port as "7171:7171", it can end up bound only to localhost (127.0.0.1), which is why you can access it locally. If you want to allow public access, try binding explicitly to all interfaces:
server:
  ports:
    - "0.0.0.0:7171:7171"
    - "0.0.0.0:7172:7172"
Then you can access the port via your public IP address or your DuckDNS hostname.
FYI: beware of the security risks of exposing these services to the public.
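As a quick check that the DuckDNS side is working, confirm the name resolves to your current public IP (assuming dig is installed; <mydomain> as above):
dig +short <mydomain>.duckdns.org
If it resolves correctly but connections still fail, the missing piece is often port forwarding on your router or firewall rather than the compose file itself.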

Using https with grafana/caddy on docker compose

I'm trying to understand how to implement HTTPS with grafana/caddy in docker compose, without a domain name.
Currently, I access Grafana via http://xx.xxx.xx.xx:3000/
I would like this to be HTTPS, but I'm struggling to understand how to generate the cert and have it work as expected. I think Let's Encrypt requires a domain, which I don't have.
version: "3"
networks:
monitor-net:
driver: bridge
volumes:
grafana_data: {}
services:
grafana:
image: grafana/grafana:8.4.4
container_name: grafana
volumes:
- grafana_data:/var/lib/grafana
- ./grafana/provisioning/dashboards:/etc/grafana/provisioning/dashboards
- ./grafana/provisioning/datasources:/etc/grafana/provisioning/datasources
environment:
- GF_SECURITY_ADMIN_USER=${GF_ADMIN_USER}
- GF_SECURITY_ADMIN_PASSWORD=${GF_ADMIN_PASS}
- GF_USERS_ALLOW_SIGN_UP=false
restart: unless-stopped
expose:
- 3000
networks:
- monitor-net
labels:
org.label-schema.group: "monitoring"
caddy:
image: caddy:2.3.0
container_name: caddy
ports:
- "3000:3000"
- "9090:9090"
- "9093:9093"
- "9091:9091"
volumes:
- ./caddy:/etc/caddy
environment:
- ADMIN_USER=${GF_ADMIN_USER}
- ADMIN_PASSWORD=${GF_ADMIN_PASS}
- ADMIN_PASSWORD_HASH=${ADMIN_PASS_HASH}
restart: unless-stopped
networks:
- monitor-net
labels:
org.label-schema.group: "monitoring"
I'm assuming I would create a volume at /etc/caddy/certs to store the certificates, but I don't know how to generate a certificate for an IP-only setup or how Caddy would pick it up.
Caddy for IP with SSL
By default, Caddy serves all sites over HTTPS.
Caddy serves IP addresses and local/internal hostnames over HTTPS using self-signed certificates that are automatically trusted locally (if permitted).
Examples: localhost, 127.0.0.1
See the official docs.
In your Caddyfile you have to add something like this:
http://192.168.1.25:3000 {
    reverse_proxy grafana_ip:3000
}
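To actually get HTTPS on the bare IP, a sketch using Caddy's internally-signed certificates might look like this (assuming your Caddy version supports the tls internal directive; browsers will warn unless you trust Caddy's root CA):
https://192.168.1.25:3000 {
    tls internal
    reverse_proxy grafana:3000
}
Since Caddy and Grafana share the monitor-net network in the compose file above, the upstream can be addressed by its container name.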
It looks like Caddy cannot obtain publicly trusted HTTPS certificates for bare IP addresses, and Let's Encrypt does not currently issue certificates for bare IPs either.
However, ZeroSSL does appear to support issuing certificates for IPs. You could try using their instructions to switch one or all of your sites to ZeroSSL, but I wasn't able to get this to work on my test server.
The best option is probably to get a domain that you can point at your server, and then serve it from there.

Docker container getting connection refused from postgres container in docker-compose

I've been beating my head against this for a few days now, and I'm finally asking for help after trying and failing to find the solution on my own.
I have a docker-compose file that looks like this:
services:
  db:
    image: ...
    container_name: db
    ports:
      - "8095:5432"
    networks:
      - mynetwork
  springservice:
    image: ...
    container_name: springservice
    depends_on:
      - db
    ports:
      - "8090:8090"
    networks:
      - mynetwork
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://db:8095/dbname
      - SPRING_DATASOURCE_USER=user
      - SPRING_DATASOURCE_PASSWORD=password
networks:
  mynetwork:
    driver: bridge
    name: mynetwork
Postgres has to be put on another port because we've got 3 postgres containers in that compose, so each gets its own port.
Postgres's listen_addresses is set to "*".
pg_hba is set with "host all 0.0.0.0/0 md5"
Both containers come up, but when I curl from the service container to http://db:8095/, I get connection refused.
What am I missing here?
Your port mapping is meaningless inside the Docker network; it only maps the container port to the host. Inside the network, the container is always reachable on its native port:
- SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/dbname
Also note that you don't need to publish the port at all to reach it from inside the network. Publishing a database port can pose a security risk; if you can, don't publish it, so the database is only accessible from inside the Docker network.
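Applied to the compose file above, the relevant changes would be roughly:
services:
  db:
    image: ...
    container_name: db
    networks:
      - mynetwork
    # no "ports:" entry - db is only reachable from inside mynetwork
  springservice:
    # ...unchanged...
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/dbname
      - SPRING_DATASOURCE_USER=user
      - SPRING_DATASOURCE_PASSWORD=password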

Connect to user-defined network in Docker

I imagine this question has been asked a bunch, but I can't seem to find a concise answer...
I have the following Docker Compose:
version: "3.7"
networks:
foo-network:
driver: bridge
services:
foo-pg-db:
image: postgres:9.6.2-alpine
restart: always
volumes:
- ./bootstrap/pg:/docker-entrypoint-initdb.d/
ports:
- 5432:5432
environment:
POSTGRES_USER: admin
networks:
- foo-network
This runs and I'm able to connect to it from my app code with the URL "postgres://admin@foo-pg-db/foo", but if I'm trying to connect to it from my Mac, what is the "host" portion of the connection URL? It's not localhost, is it?
For me, a similar thing happened with a MySQL container: "localhost" was not accepted as the container address, but "127.0.0.1" worked.
127.0.0.1 is the default loopback address on Linux systems; check your hosts file for the exact address used by your OS.
Also, your network has nothing to do with it, because you are publishing the port directly:
ports:
  - 5432:5432
This binds your container port to a host port:
<host_port>:<container_port>
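Putting that together for this compose file: because port 5432 is published, the host portion from your Mac is simply 127.0.0.1 (or localhost, if your client resolves it), e.g.:
postgres://admin@127.0.0.1:5432/foo
(The database name foo is taken from your app's connection URL.)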

Gogs + Drone getsockopt: connection refused

In the Gogs webhooks interface, when I click the test delivery button I get this error:
Delivery: Post http://localhost:8000/hook?access_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ0ZXh0IjoidG9tL2Ryb255IiwidHlwZSI6Imhvb2sifQ.UZdOVW2IdiDcLQzKcnlmlAxkuA8GTZBH634G0K7rggI: dial tcp [::1]:8000: getsockopt: connection refused
This is my docker-compose.yml file:
version: '2'
services:
  gogs:
    image: gogs/gogs
    ports:
      - 3000:3000
      - 22:22
    links:
      - mysql
  mysql:
    image: mysql
    expose:
      - 3306
    environment:
      - MYSQL_ROOT_PASSWORD=1234
      - MYSQL_DATABASE=gogs
  drone:
    image: drone/drone
    links:
      - gogs
    ports:
      - 8000:8000
    volumes:
      - ./drone:/var/lib/drone
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - REMOTE_DRIVER=gogs
      - REMOTE_CONFIG=http://gogs:3000?open=true
      # - PUBLIC_MODE=true
The root cause is that Drone assumes its external address is localhost:8000, because that is how it is being accessed from your browser. Drone therefore configures all Gogs webhooks to use localhost:8000/hook as the callback URL.
The problem is that Gogs and Drone are running in separate containers on separate networks. This means that when Gogs tries to send the hook to Drone, it sends it to localhost:8000 and fails, because Drone is on a separate bridge.
Recommended Fix
The recommended fix is to use DNS or an IP address with Drone and Gogs. If you are running a production installation of either system, it is unlikely you will be using localhost, so this seems like a reasonable solution.
For local testing, you can also use the local IP address assigned by Docker. You can find your Drone and Gogs IP addresses using docker inspect:
"Networks": {
"default": {
"IPAddress": "172.18.0.3"
}
}
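For example, to print just the address (the container name drone_drone_1 here is a guess; substitute whatever docker ps shows):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' drone_drone_1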
Why not custom hostnames?
Using custom hostnames, such as http://gogs, is problematic because Drone creates ephemeral Docker containers for every build using the default Docker network settings. This means your build environment uses its own isolated network and will not be able to resolve http://gogs.
So even if we configured Drone and Gogs to communicate using custom hostnames, the build environment would be unable to resolve the Gogs hostname to clone your repository.