Access container from docker-compose using linuxserver/duckdns IP - docker-compose

I was looking for software like No-IP to dynamically update my IP using a free domain from them (like <domain>.zapto.org), but this time for use with Docker containers. So I found out about DuckDNS and tried setting it up.
Well, perhaps I got it wrong, but as I understood it, I can create a service within my docker-compose services that runs linuxserver/duckdns. When I do that, I suppose I can then access my other services from that same compose file using the domain created on DuckDNS, is that right?
For instance, I got this docker-compose:
version: "3.9"
services:
dns_server:
image: linuxserver/duckdns:version-13f609b7
restart: always
environment:
TOKEN: ${DUCKDNS_TOKEN}
TZ: ${TZ}
SUBDOMAINS: ${DUCKDNS_SUBDOMAINS}
depends_on:
- server
- db
- phpmyadmin
server:
# ...
restart: always
ports:
- "7171:7171"
- "7172:7172"
# ...
command: sh -c "/wait && screen -S tfs ./tfs"
# Database
db:
image: bitnami/mariadb:10.8.7-debian-11-r1
restart: always
ports:
- "3306:3306"
# ...
# phpmyadmin
phpmyadmin:
# ...
image: bitnami/phpmyadmin:5.2.1-debian-11-r1
restart: always
ports:
- "8080:8080"
- "8443:8443"
# ...
That compose brings all of those containers up and running.
When I try to reach my server service using 127.0.0.1:7171 or localhost:7171, and my phpmyadmin using 127.0.0.1:8080, it works; but it doesn't when I try <mydomain>.duckdns.org:7171 or <mydomain>.duckdns.org:8080.
What is wrong?

As far as I know, when you define a port mapping like - "7171:7171", it gets bound to your localhost (127.0.0.1), which is why you can access it there. If you want to allow public access, try something like:
server:
  ports:
    - "0.0.0.0:7171:7171"
    - "0.0.0.0:7172:7172"
Then you can access the ports via your public IP address or your DuckDNS hostname.
FYI: beware of the security risks of exposing these services to the public internet.
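To confirm which host interface a published port actually ended up bound to, you can ask Docker directly (a quick check; the container name below is hypothetical, since Compose derives it from your project name):
docker ps --format '{{.Names}}\t{{.Ports}}'
docker port myproject-server-1 7171
# prints the host binding, e.g. 0.0.0.0:7171
If the binding already shows 0.0.0.0 and the DuckDNS hostname still fails, the usual remaining suspect is the router or firewall in front of the host (port forwarding), not the compose file.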

Related

Using https with grafana/caddy on docker compose

I'm trying to understand how to implement https with grafana/caddy in docker compose without a domain name.
Currently, I access grafana via http://xx.xxx.xx.xx:3000/
I would like this to be https, but am struggling to understand how to generate the cert and have it work as expected. I think Let's Encrypt requires a domain, which I don't have.
version: "3"
networks:
monitor-net:
driver: bridge
volumes:
grafana_data: {}
services:
grafana:
image: grafana/grafana:8.4.4
container_name: grafana
volumes:
- grafana_data:/var/lib/grafana
- ./grafana/provisioning/dashboards:/etc/grafana/provisioning/dashboards
- ./grafana/provisioning/datasources:/etc/grafana/provisioning/datasources
environment:
- GF_SECURITY_ADMIN_USER=${GF_ADMIN_USER}
- GF_SECURITY_ADMIN_PASSWORD=${GF_ADMIN_PASS}
- GF_USERS_ALLOW_SIGN_UP=false
restart: unless-stopped
expose:
- 3000
networks:
- monitor-net
labels:
org.label-schema.group: "monitoring"
caddy:
image: caddy:2.3.0
container_name: caddy
ports:
- "3000:3000"
- "9090:9090"
- "9093:9093"
- "9091:9091"
volumes:
- ./caddy:/etc/caddy
environment:
- ADMIN_USER=${GF_ADMIN_USER}
- ADMIN_PASSWORD=${GF_ADMIN_PASS}
- ADMIN_PASSWORD_HASH=${ADMIN_PASS_HASH}
restart: unless-stopped
networks:
- monitor-net
labels:
org.label-schema.group: "monitoring"
I'm assuming I would create a volume on /etc/caddy/certs where I'd store the certificates, but I don't know how to generate a certificate for an IP only, or how it would get picked up by Caddy.
Caddy for IP with SSL
By default, Caddy serves all sites over HTTPS.
Caddy serves IP addresses and local/internal hostnames over HTTPS using self-signed certificates that are automatically trusted locally (if permitted).
Examples: localhost, 127.0.0.1
Official docs here.
In your Caddyfile, you have to add something like this:
http://192.168.1.25:3000 {
    reverse_proxy grafana_ip:3000
}
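If you want Caddy to actually answer over HTTPS on the bare IP, its internal CA can issue a self-signed certificate (clients will warn unless they trust Caddy's local root). A minimal sketch, assuming the Grafana container is reachable as grafana on the compose network:
https://192.168.1.25:3000 {
    tls internal
    reverse_proxy grafana:3000
}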
It looks like Caddy does not support obtaining publicly trusted HTTPS certificates for IP addresses. Additionally, Let's Encrypt does not currently support issuing certificates for bare IP addresses.
However, it does appear that ZeroSSL supports generating certificates for IPs. You could try using these instructions to change one or all of your sites to use ZeroSSL, but I wasn't able to get this to work on my test server.
The best option is probably to get a domain that you can point at your server, and then serve it from there.
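If you do point a domain at the server (even a free one, such as the DuckDNS subdomain from the question above), Caddy's automatic HTTPS handles the certificate on its own. A hedged sketch, with the domain as a placeholder (you would also need to publish ports 80 and 443 on the caddy service):
mydomain.duckdns.org {
    reverse_proxy grafana:3000
}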

Docker container getting connection refused from postgres container in docker-compose

I've been beating my head against this for a few days now and I'm finally asking for help after trying to find the solution myself from all over.
I have a docker-compose file that looks like this:
services:
  db:
    image: ...
    container_name: db
    ports:
      - "8095:5432"
    networks:
      - mynetwork
  springservice:
    image: ...
    container_name: springservice
    depends_on:
      - db
    ports:
      - "8090:8090"
    networks:
      - mynetwork
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://db:8095/dbname
      - SPRING_DATASOURCE_USER=user
      - SPRING_DATASOURCE_PASSWORD=password
networks:
  mynetwork:
    driver: bridge
    name: mynetwork
Postgres has to be put on another port because we've got three Postgres containers in that compose, so each gets its own port.
Postgres's listen_addresses is set to "*".
pg_hba is set with "host all 0.0.0.0/0 md5".
Both containers come up, but when I curl from the service container to http://db:8095/, I get connection refused.
What am I missing here?
Your port mapping is meaningless inside the Docker network; it is only a mapping to the host system. Inside the network, the container is always available on its native port:
- SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/dbname
Also note that you don't need to publish the port at all to access it from inside the network. Publishing a database port can pose security risks; if you can, don't publish it, and the database will only be accessible from inside the Docker network.
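A minimal sketch of that layout (image tags and names are illustrative): the app reaches the database by service name on the shared network, and no ports: entry is needed for container-to-container traffic.
services:
  db:
    image: postgres:13
    networks:
      - mynetwork
  springservice:
    image: myorg/springservice:latest # hypothetical image
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/dbname
    networks:
      - mynetwork
networks:
  mynetwork:
    driver: bridge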

Connecting to Postgres Docker server - authentication failed

I have a PostgreSQL container set up that I can successfully connect to with Adminer but I'm getting an authentication error when trying to connect via something like DBeaver using the same credentials.
I have tried exposing port 5432 in the Dockerfile and can see in Docker for Windows that the port is correctly bound. I'm guessing that since it's an authentication error, the issue isn't that the server can't be seen, but rather something with the username or password?
Docker Compose file and Dockerfile look like this.
version: "3.7"
services:
db:
build: ./postgresql
image: postgresql
container_name: postgresql
restart: always
environment:
- POSTGRES_DB=trac
- POSTGRES_USER=user
- POSTGRES_PASSWORD=1234
ports:
- 5432:5432
adminer:
image: adminer
restart: always
ports:
- 8080:8080
nginx:
build: ./nginx
image: nginx_db
container_name: nginx_db
restart: always
ports:
- "8004:8004"
- "8005:8005"
Dockerfile (it will later be used to copy SSL certs and keys):
FROM postgres:9.6
EXPOSE 5432
Wondering if there is something else I should be doing to enable this to work via some other utility?
Any help would be great.
Thanks in advance.
Update:
I tried accessing the database through the IP of the postgresql container (172.28.0.3), but the connection times out, which suggests that PostgreSQL is correctly listening on 0.0.0.0:5432 and that, for some reason, the user and password are not usable outside of Docker, even from the host machine using localhost.
Check your pg_hba.conf file in the Postgres data folder.
The default configuration is that you can only log in from localhost (which I assume Adminer is doing) but not from external IPs.
In order to allow access from all external addresses via password authentication, add the following line to your pg_hba.conf:
host all all 0.0.0.0/0 md5
Then you can connect to the Postgres DB running in the Docker container from outside, provided you publish the port (5432).
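A sketch of applying the change without recreating the container, using the container name and credentials from the compose file above (pg_reload_conf() makes the server re-read pg_hba.conf):
docker exec -it postgresql psql -U user -d trac -c "SELECT pg_reload_conf();"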
Use the command docker container inspect <container id>; this will tell you which IP addresses and ports are exposed outside the container.
The command docker container ls will help identify the container ID.
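For example, with the container from the compose file above (the --format filter is optional and just narrows the output to the port mappings):
docker container ls
docker container inspect --format '{{json .NetworkSettings.Ports}}' postgresql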
After updating my default db name, I also had to update my docker-compose file by explicitly publishing the ports, as the OP did:
db:
  image: postgres:13-alpine
  volumes:
    - dev-db-data:/var/lib/postgresql/data
  environment:
    - POSTGRES_DB=devdb
    - POSTGRES_USER=user
    - POSTGRES_PASSWORD=1234
  ports:
    - 5432:5432
But the key here was restarting the server! After that, DBeaver connected to localhost:5432 :)

Accessing postgres data in docker-compose network

I'm having trouble accessing a database created from a docker-compose file.
Given the following compose file, I should be able to connect to it from Java using something like:
jdbc:postgresql://eprase:eprase@database:7000/eprase
However, the connection is rejected. I can't even use pgAdmin to connect to it with the same details to create a new server.
I've entered the database container and run psql commands to verify that the eprase user and database have been created according to the postgres Docker documentation, and everything seems fine. I can't tell if the problem is within the database container or something I need to change in the compose network.
The client & server services can largely be ignored, the server is a java based web API and the client is an Angular app.
Compose file:
version: "3"
services:
client:
image: eprase/client:latest
build: ./client/eprase-app
networks:
api:
ports:
- "5000:80"
tty: true
depends_on:
- server
server:
image: eprase/server:latest
build: ./server
networks:
api:
ports:
- "6000:8080"
depends_on:
- database
database:
image: postgres:9
volumes:
- "./database/data:/var/lib/postgresql/data"
environment:
- "POSTGRES_USER=eprase"
- "POSTGRES_PASSWORD=eprase"
- "POSTGRES_DB=eprase"
networks:
api:
ports:
- "7000:5432"
restart: unless-stopped
pgadmin:
image: dpage/pgadmin4:latest
environment:
- "PGADMIN_DEFAULT_EMAIL=admin#eprase.com"
- "PGADMIN_DEFAULT_PASSWORD=eprase"
networks:
api:
ports:
- "8000:80"
depends_on:
- database
networks:
api:
The PostgreSQL database is listening on container port 5432. The 7000:5432 line is mapping host port 7000 to container port 5432. That allows you to connect to the database on port 7000. But, your services on a common network (api) should communicate with each other via the container ports.
So, from the perspective of the containers for the client and server services, the connection string should be:
jdbc:postgresql://eprase:eprase@database:5432/eprase
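As an aside, in case the # is standing in for @: the stock PostgreSQL JDBC driver does not accept user:password@host syntax; credentials are usually passed as query parameters (or as separate driver properties), e.g.:
jdbc:postgresql://database:5432/eprase?user=eprase&password=eprase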

docker link resolves to localhost

I'm stuck on a very strange Docker problem that I've not encountered before. What I want to do is use docker-compose to make my application available from the internet. It's currently running on an instance on DigitalOcean, and I'm working with the following docker-compose.yml:
version: '2.2'
services:
  mongodb:
    image: mongo:3.4
    volumes:
      - ./mongo:/data/db
    ports:
      - "27017"
  mongoadmin: # web UI for mongo
    image: mongo-express
    ports:
      - "8081:8081"
    links:
      - "mongodb:mongo"
    environment:
      - ME_CONFIG_OPTIONS_EDITORTHEME=ambiance
      - ME_CONFIG_BASICAUTH_USERNAME=user
      - ME_CONFIG_BASICAUTH_PASSWORD=pass
  app:
    image: project/name:0.0.1
    volumes:
      - ./project:/usr/src/app
    working_dir: /usr/src/app
    links:
      - "mongodb:mongodb"
    environment:
      - NODE_ENV=production
    command: ["npm", "start"]
    ports:
      - "3000:3000"
Mongoadmin connects properly and is able to connect to the database, while the database itself cannot be connected to from outside the host.
The problem is that the app won't connect to the right address. It is an Express server using Mongoose to connect to the database. Before connecting, I log the URL it will connect to. In my config.js I've listed mongodb://mongodb/project, but this resolves to localhost, resulting in MongoError: failed to connect to server [localhost:27017] on first connect. The name of the container is resolved, but not to the proper address.
I've tried to connect to the IP (in the 172.18.0.0 range) that Docker assigned to the container, but that also resolved to localhost. I've looked into /etc/hosts, but it does not show anything related to this. Furthermore, I'm baffled because the mongo-express container is able to connect.
I've tried changing the name of the container, thinking it might be blocked for some reason due to previous runs or something like that, but this did not resolve the issue.
I've tried both explicit links and implicit resolution using Docker's internal DNS, but neither worked.
When binding port 27017 to localhost it is able to connect, but because of security and easy configuration via environment variables, I'd rather not have the mongodb instance bound to localhost.
I've also tried to run this on my local machine, and there it works as expected: both mongoadmin and app are able to connect to the mongodb container. My local machine runs Docker version 1.12.6, build 78d1802, while the VPS runs Docker version 17.06.2-ce, build cec0b72, thus a newer version.
Could this be a newly introduced bug? Or am I missing something else? Any help would be appreciated.
Your docker-compose file seems not to have linked the app and mongodb containers.
You have this:
app:
  image: project/name:0.0.1
  volumes:
    - ./project:/usr/src/app
  working_dir: /usr/src/app
  environment:
    - NODE_ENV=production
  command: ["npm", "start"]
  ports:
    - "3000:3000"
While I think it should be this:
app:
  image: project/name:0.0.1
  volumes:
    - ./project:/usr/src/app
  working_dir: /usr/src/app
  links:
    - "mongodb:mongodb"
  environment:
    - NODE_ENV=production
  command: ["npm", "start"]
  ports:
    - "3000:3000"