Docker Compose on Amazon ECS - PostgreSQL

As I am quite familiar with Docker Compose and not so familiar with Amazon's CloudFormation, I found it extremely convenient to be able to run my Compose files via the ECS integration: behind the scenes everything you need is created for you. You get a load balancer (if one does not already exist) and an ECS cluster with your services running, and everything is connected and just works. When I started doing slightly more advanced things, I ran into a problem I can't seem to find an answer to online.
I have two services in my Compose file: my Spring Boot web app and my Postgres database. I wanted to implement SSL and redirect all traffic to HTTPS. After a lot of research and a lot of trial and error, I finally got it to work by extending my Compose file with x-aws-cloudformation and adding native CloudFormation YAML. Doing this forced me to choose an application load balancer over a network load balancer, since an ALB operates on layer 7 (HTTP/HTTPS). My problem now is that I have no way of reaching my Postgres database and running queries against it from, for example, IntelliJ. My Spring Boot app works fine and can read from and write to the database. Before the SSL implementation I didn't specify a load balancer in my Compose file, so a network load balancer was created every time I ran it, and I could connect to my database from IntelliJ and run queries. I have tried adding an inbound rule to my security group that allows all inbound traffic to my database on port 5432, but that didn't help. I may not be setting the correct host in my IntelliJ connection details; I have tried the following:
- DNS name of the load balancer
- IP address of the load balancer
- public IP of my Postgres task (launch type: Fargate)
I would simply like to reach my database and run queries against it even though it is running inside an AWS ECS cluster behind an application load balancer. Is there a way of achieving this, or do I have to have two separate load balancers (one application LB and one network LB)?
Here is my docker-compose file (I have omitted a few irrelevant environment variables):
version: "3.9"

x-aws-loadbalancer: arn:my-application-load-balancer

services:
  my-web-app:
    build:
      context: .
    image: hub/my-web-app
    x-aws-pull_credentials: xxxxxxxx
    container_name: my-app-name
    ports:
      - "80:80"
    networks:
      - my-app-network
    depends_on:
      - postgres
    deploy:
      replicas: 1
      resources:
        limits:
          cpus: '0.5'
          memory: 2048M
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://postgres:5432/my-db?currentSchema=my-db_schema
      - SPRING_DATASOURCE_USERNAME=dbpass
      - SPRING_DATASOURCE_PASSWORD=dbpass
      - SPRING_DATASOURCE_DRIVER-CLASS-NAME=org.postgresql.Driver
      - SPRING_JPA_DATABASE_PLATFORM=org.hibernate.dialect.PostgreSQLDialect

  postgres:
    build:
      context: docker/database
    image: hub/my-db
    container_name: my-db
    networks:
      - my-app-network
    deploy:
      replicas: 1
      resources:
        limits:
          cpus: '0.5'
          memory: 2048M
    environment:
      - POSTGRES_USER=dbpass
      - POSTGRES_PASSWORD=dbpass
      - POSTGRES_DB=my-db

networks:
  my-app-network:
    name: my-app-network

x-aws-cloudformation:
  Resources:
    MyWebAppTCP80TargetGroup:
      Properties:
        HealthCheckPath: /actuator/health
        Matcher:
          HttpCode: 200-499
    MyWebAppTCP80Listener:
      Type: AWS::ElasticLoadBalancingV2::Listener
      Properties:
        Protocol: HTTP
        Port: 80
        LoadBalancerArn: xxxxx
        DefaultActions:
          - Type: redirect
            RedirectConfig:
              Port: 443
              Host: "#{host}"
              Path: "/#{path}"
              Query: "#{query}"
              Protocol: HTTPS
              StatusCode: HTTP_301
    MyWebAppTCP443Listener:
      Type: AWS::ElasticLoadBalancingV2::Listener
      Properties:
        Protocol: HTTPS
        Port: 443
        LoadBalancerArn: xxxxxxxxx
        Certificates:
          - CertificateArn: "xxxxxxxxxx"
        DefaultActions:
          - Type: forward
            ForwardConfig:
              TargetGroups:
                - TargetGroupArn:
                    Ref: MyWebAppTCP80TargetGroup
    MyWebAppTCP80RedirectRule:
      Type: AWS::ElasticLoadBalancingV2::ListenerRule
      Properties:
        ListenerArn:
          Ref: MyWebAppTCP80Listener
        Priority: 1
        Conditions:
          - Field: host-header
            HostHeaderConfig:
              Values:
                - "*.my-app.com"
                - "www.my-app.com"
                - "my-app.com"
        Actions:
          - Type: redirect
            RedirectConfig:
              Host: "#{host}"
              Path: "/#{path}"
              Query: "#{query}"
              Port: 443
              Protocol: HTTPS
              StatusCode: HTTP_301
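Since an application load balancer only has HTTP/HTTPS listeners, it will never proxy port 5432, so the ALB's DNS name or IP cannot work as the JDBC host. One option, sketched below under assumptions rather than as a definitive fix, is to keep the ALB for web traffic and open 5432 directly to a single workstation, then point IntelliJ at the Postgres task's public IP. Both the GroupId and CidrIp values below are placeholders; the real group is whichever security group the ECS integration generated for my-app-network.

```
x-aws-cloudformation:
  Resources:
    # Hypothetical extra resource: allow one workstation to reach Postgres directly.
    # GroupId is a placeholder for the security group generated for my-app-network;
    # CidrIp is a placeholder for your own public IP.
    PostgresClientIngress:
      Type: AWS::EC2::SecurityGroupIngress
      Properties:
        GroupId: sg-0123456789abcdef0
        IpProtocol: tcp
        FromPort: 5432
        ToPort: 5432
        CidrIp: 203.0.113.10/32
        Description: Direct access to Postgres for IntelliJ
```

With a rule like this the host in IntelliJ has to be the Postgres task's public IP (which changes on every redeploy), not the load balancer. The alternative mentioned in the question, a second network load balancer with a TCP :5432 listener, would give a stable hostname at the cost of running two load balancers.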

Related

Unable to see services with traefik

I'm a beginner and I'm a bit confused about how Traefik works.
I want to run the app freqtrade (a trading bot) as a Docker service and replicate it with different configurations; if you have 5 minutes you can go check this guy, I want to do the same thing.
But I don't understand why I can't see my app running behind Traefik.
What I did:
I pointed my domain to my server like this:
server config (screenshot)
On this machine I created a Docker swarm and the Traefik service following this tutorial, and then my docker compose file looks like this:
```
version: '3.3'
services:
  traefik:
    # Use the latest v2.2.x Traefik image available
    image: traefik:v2.2
    ports:
      # Listen on port 80, default for HTTP, necessary to redirect to HTTPS
      - 80:80
      # Listen on port 443, default for HTTPS
      - 443:443
    networks:
      - traefik-public
    deploy:
      placement:
        constraints:
          # Make the traefik service run only on the node with this label
          # as the node with it has the volume for the certificates
          - node.labels.traefik-public.traefik-public-certificates == true
      labels:
        # Enable Traefik for this service, to make it available in the public network
        - traefik.enable=true
        # Use the traefik-public network (declared below)
        - traefik.docker.network=traefik-public
        # Use the custom label "traefik.constraint-label=traefik-public"
        # This public Traefik will only use services with this label
        # That way you can add other internal Traefik instances per stack if needed
        - traefik.constraint-label=traefik-public
        # admin-auth middleware with HTTP Basic auth
        # Using the environment variables USERNAME and HASHED_PASSWORD
        - traefik.http.middlewares.admin-auth.basicauth.users=${USERNAME?Variable not set}:${HASHED_PASSWORD?Variable not set}
        # https-redirect middleware to redirect HTTP to HTTPS
        # It can be re-used by other stacks in other Docker Compose files
        - traefik.http.middlewares.https-redirect.redirectscheme.scheme=https
        - traefik.http.middlewares.https-redirect.redirectscheme.permanent=true
        # traefik-http set up only to use the middleware to redirect to https
        # Uses the environment variable DOMAIN
        - traefik.http.routers.traefik-public-http.rule=Host(`${DOMAIN?Variable not set}`)
        - traefik.http.routers.traefik-public-http.entrypoints=http
        - traefik.http.routers.traefik-public-http.middlewares=https-redirect
        # traefik-https the actual router using HTTPS
        # Uses the environment variable DOMAIN
        - traefik.http.routers.traefik-public-https.rule=Host(`${DOMAIN?Variable not set}`)
        - traefik.http.routers.traefik-public-https.entrypoints=https
        - traefik.http.routers.traefik-public-https.tls=true
        # Use the special Traefik service api@internal with the web UI/Dashboard
        - traefik.http.routers.traefik-public-https.service=api@internal
        # Use the "le" (Let's Encrypt) resolver created below
        - traefik.http.routers.traefik-public-https.tls.certresolver=le
        # Enable HTTP Basic auth, using the middleware created above
        - traefik.http.routers.traefik-public-https.middlewares=admin-auth
        # Define the port inside of the Docker service to use
        - traefik.http.services.traefik-public.loadbalancer.server.port=8080
    volumes:
      # Add Docker as a mounted volume, so that Traefik can read the labels of other services
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # Mount the volume to store the certificates
      - traefik-public-certificates:/certificates
    command:
      # Enable Docker in Traefik, so that it reads labels from Docker services
      - --providers.docker
      # Add a constraint to only use services with the label "traefik.constraint-label=traefik-public"
      - --providers.docker.constraints=Label(`traefik.constraint-label`, `traefik-public`)
      # Do not expose all Docker services, only the ones explicitly exposed
      - --providers.docker.exposedbydefault=false
      # Enable Docker Swarm mode
      - --providers.docker.swarmmode
      # Create an entrypoint "http" listening on port 80
      - --entrypoints.http.address=:80
      # Create an entrypoint "https" listening on port 443
      - --entrypoints.https.address=:443
      # Create the certificate resolver "le" for Let's Encrypt, uses the environment variable EMAIL
      - --certificatesresolvers.le.acme.email=${EMAIL?Variable not set}
      # Store the Let's Encrypt certificates in the mounted volume
      - --certificatesresolvers.le.acme.storage=/certificates/acme.json
      # Use the TLS Challenge for Let's Encrypt
      - --certificatesresolvers.le.acme.tlschallenge=true
      # Enable the access log, with HTTP requests
      - --accesslog
      # Enable the Traefik log, for configurations and errors
      - --log
      # Enable the Dashboard and API
      - --api

volumes:
  # Create a volume to store the certificates, there is a constraint to make sure
  # Traefik is always deployed to the same Docker node with the same volume containing
  # the HTTPS certificates
  traefik-public-certificates:

networks:
  traefik-public:
    driver: overlay
    attachable: true
```
And deploy it:
docker stack deploy -c traefik.yml traefik
After that Traefik works fine. But why can't I see port 8080 in my entrypoints? Is it important for the other services?
Entrypoint traefik (screenshot)
I tried disabling the firewall in the server configuration and also ran ufw allow 8080, but nothing changed.
I created my application the same way I created the Traefik service, with this docker-compose file:
---
version: '3'
networks:
  traefik_traefik-public:
    external: true
services:
  freqtrade:
    image: freqtradeorg/freqtrade:stable
    # image: freqtradeorg/freqtrade:develop
    # Use plotting image
    # image: freqtradeorg/freqtrade:develop_plot
    # Build step - only needed when additional dependencies are needed
    # build:
    #   context: .
    #   dockerfile: "./docker/Dockerfile.custom"
    restart: unless-stopped
    container_name: freqtrade
    volumes:
      - "./user_data:/freqtrade/user_data"
    # Expose api on port 8080 (localhost only)
    # Please read the https://www.freqtrade.io/en/stable/rest-api/ documentation
    # before enabling this.
    networks:
      - traefik_traefik-public
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
      restart_policy:
        condition: on-failure
        delay: 5s
    command: >
      trade
      --logfile /freqtrade/user_data/logs/freqtrade.log
      --db-url sqlite:////freqtrade/user_data/tradesv3.sqlite
      --config /freqtrade/user_data/config.json
      --strategy SampleStrategy
    labels:
      - traefik.http.routers.bot001.tls=true'
      - traefik.http.routers.bot001.rule=Host(`bot001.bots.lordgoliath.com`)'
      - traefik.http.services.bot001.loadbalancer.server.port=8080'
and this is part of the bot's configuration file (for access to the UI):
"api_server": {
"enabled": true,
"enable_openapi": true,
"listen_ip_address": "0.0.0.0",
"listen_port": 8080,
"verbosity": "info",
"jwt_secret_key": "somethingrandom",
"CORS_origins": ["https://bots.lordgoliath.com"],
"username": "api",
"password": "api"
},
Then:
docker stack deploy -c docker-compose.yml freqtrade
So I have this:
goliath@localhost:~/freqtrade_test/user_data$ docker service ls
ID             NAME                  MODE         REPLICAS   IMAGE                           PORTS
nkvpjjztjibg   freqtrade_freqtrade   replicated   1/1        freqtradeorg/freqtrade:stable
6qryu28ute9i   traefik_traefik       replicated   1/1        traefik:v2.2                    *:80->80/tcp, *:443->443/tcp
I can see the bot running with the command docker service logs freqtrade_freqtrade, but when I go to my domain I only see the Traefik dashboard and nothing else running.
traefik http (screenshot)
traefik https (screenshot)
How can I see my freqtrade app running? How can I access the bot UI via my domain?
Thanks!
Sorry for my bad English, I hope this is clear enough to understand my problem.
UPDATE
docker service inspect --pretty freqtrade_freqtrade

ID:             o6bpaso69i9n6etybtj09xsqi
Name:           ft1_freqtrade
Labels:
 com.docker.stack.image=freqtradeorg/freqtrade:stable
 com.docker.stack.namespace=ft1
Service Mode:   Replicated
 Replicas:      1
Placement:
 Constraints:   [node.role == manager]
UpdateConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Update order:      stop-first
RollbackConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Rollback order:    stop-first
ContainerSpec:
 Image:         freqtradeorg/freqtrade:stable@sha256:3b2f2acb5b9cfedaa7b07cf56af01d1a750bce4c3054bdbaf40ac27935c984eb
 Args:          trade --logfile /freqtrade/user_data/logs/freqtrade.log --db-url sqlite:////freqtrade/user_data/tradesv3.sqlite --config /freqtrade/user_data/config.json --strategy SampleStrategy
Mounts:
 Target:        /freqtrade/user_data
  Source:       /home/goliath/freqtrade_test/user_data
  ReadOnly:     false
  Type:         bind
Resources:
Networks: traefik_traefik-public
Endpoint Mode:  vip
UPDATE: new docker-compose.yml
---
version: '3'
networks:
  traefik_traefik-public:
    external: true
services:
  freqtrade:
    image: freqtradeorg/freqtrade:stable
    # image: freqtradeorg/freqtrade:develop
    # Use plotting image
    # image: freqtradeorg/freqtrade:develop_plot
    # Build step - only needed when additional dependencies are needed
    # build:
    #   context: .
    #   dockerfile: "./docker/Dockerfile.custom"
    restart: unless-stopped
    container_name: freqtrade
    volumes:
      - "./user_data:/freqtrade/user_data"
    # Expose api on port 8080 (localhost only)
    # Please read the https://www.freqtrade.io/en/stable/rest-api/ documentation
    # before enabling this.
    networks:
      - traefik_traefik-public
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
      restart_policy:
        condition: on-failure
        delay: 5s
      labels:
        - 'traefik.enabled=true'
        - 'traefik.http.routers.bot001.tls=true'
        - 'traefik.http.routers.bot001.rule=Host(`bot001.bots.lordgoliath.com`)'
        - 'traefik.http.services.bot001.loadbalancer.server.port=8080'
    command: >
      trade
      --logfile /freqtrade/user_data/logs/freqtrade.log
      --db-url sqlite:////freqtrade/user_data/tradesv3.sqlite
      --config /freqtrade/user_data/config.json
      --strategy SampleStrategy
UPDATE: docker network ls
goliath@localhost:~/freqtrade_test$ docker network ls
NETWORK ID     NAME                     DRIVER    SCOPE
003e00401b5d   bridge                   bridge    local
9f3d9a222928   docker_gwbridge          bridge    local
09a33afad0c9   host                     host      local
r4u268yenm5u   ingress                  overlay   swarm
bed40e4a5c62   none                     null      local
qo9w45gitke5   traefik_traefik-public   overlay   swarm
This is the minimal config you need in order to see the Traefik dashboard on localhost:8080:
version: "3.9"
services:
traefik:
image: traefik:latest
command: |
--api.insecure=true
ports:
- 8080:8080
Then, your minimal configuration to get traefik to route example.com to itself:
version: "3.9"
networks:
public:
attachable: true
name: traefik
services:
traefik:
image: traefik:latest
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
command: |
--api.insecure=true
--providers.docker.exposedbydefault=false
--providers.docker.swarmmode
--providers.docker.network=traefik
ports:
- 80:80
networks:
- public
deploy:
labels:
traefik.enable: "true"
traefik.http.routers.traefik.rule: Host(`example.com`)
traefik.http.services.traefik.loadbalancer.server.port: 8080
Now, minimal https support - using Traefik self signed certs to start with. Note that we configure tls on the https entrypoint, which means traefik implicitly creates http and https variants for each router.
version: "3.9"
networks:
public:
attachable: true
name: traefik
services:
traefik:
image: traefik:latest
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
command: |
--api.insecure=true
--providers.docker.exposedbydefault=false
--providers.docker.swarmmode
--providers.docker.network=traefik
--entrypoints.http.address=:80
--entrypoints.https.address=:443
--entrypoints.https.http.tls=true
deploy:
placement:
constraints:
- node.role == manager
ports:
# - 8080:8080
- 80:80
- 443:443
networks:
- public
deploy:
labels:
traefik.enable: "true"
traefik.http.routers.traefik.rule: Host(`example.com`)
traefik.http.services.traefik.loadbalancer.server.port: 8080
At this point, gluing in your le config should be simple.
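For example, as a sketch only (reusing the le resolver flags that already appear in the tutorial-style traefik.yml earlier in this question; the email, domain, and storage path are placeholders), the command block and router labels would grow roughly like this:

```
    command: |
      --api.insecure=true
      --providers.docker.exposedbydefault=false
      --providers.docker.swarmmode
      --providers.docker.network=traefik
      --entrypoints.http.address=:80
      --entrypoints.https.address=:443
      --entrypoints.https.http.tls=true
      # Let's Encrypt resolver, same flags as in the original traefik.yml
      --certificatesresolvers.le.acme.email=you@example.com
      --certificatesresolvers.le.acme.storage=/certificates/acme.json
      --certificatesresolvers.le.acme.tlschallenge=true
    deploy:
      labels:
        traefik.enable: "true"
        traefik.http.routers.traefik.rule: Host(`example.com`)
        traefik.http.routers.traefik.tls.certresolver: le
        traefik.http.services.traefik.loadbalancer.server.port: 8080
```

You would also want to mount a volume at /certificates, as the original traefik.yml does, so the issued certificates survive restarts.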
Your freqtrade stack compose would need to be this. If this is a single node swarm, just omit the placement constraints, but when the swarm is large enough to have workers, then tasks that don't need to be on managers should explicitly be kept on workers.
Traefik needs to talk to the swarm api over the docker socket, which is on manager nodes only, which is why it must be node.role==manager.
version: "3.9"
networks:
traefik:
external: true
services:
freqtrade:
image: freqtradeorg/freqtrade:stable
command: ...
volumes: ...
networks:
- traefik
deploy:
placement:
constraints:
- node.role == worker
restart_policy:
max_attempts: 5
labels:
traefik.enabled: "true"
traefik.http.routers.bot001.rule: Host(`bot001.bots.lordgoliath.com`)
traefik.http.services.bot001.loadbalancer.server.port: 8080

Serving MLFlow artifacts through `--serve-artifacts` without passing credentials

A new version of MLFlow (1.23) provided a --serve-artifacts option (via this pull request) along with some example code. This should allow me to simplify the rollout of a server for data scientists by only needing to give them one URL for the tracking server, rather than a URI for the tracking server, URI for the artifacts server, and a username/password for the artifacts server. At least, that's how I understand it.
A complication that I have is that I need to use podman instead of docker for my containers (and without relying on podman-compose). I ask that you keep those requirements in mind; I'm aware that this is an odd situation.
What I did before this update (for MLFlow 1.22) was to create a kubernetes play yaml config, and I was successfully able to issue a podman play kube ... command to start a pod and from a different machine successfully run an experiment and save artifacts after setting the appropriate four env variables. I've been struggling with getting things working with the newest version.
I am following the docker-compose example provided here. I am trying a (hopefully) simpler approach. The following is my kubernetes play file defining a pod.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2022-01-14T19:07:15Z"
  labels:
    app: mlflowpod
  name: mlflowpod
spec:
  containers:
    - name: minio
      image: quay.io/minio/minio:latest
      ports:
        - containerPort: 9001
          hostPort: 9001
        - containerPort: 9000
          hostPort: 9000
      resources: {}
      tty: true
      volumeMounts:
        - mountPath: /data
          name: minio-data
      args:
        - server
        - /data
        - --console-address
        - :9001
    - name: mlflow-tracking
      image: localhost/mlflow:latest
      ports:
        - containerPort: 80
          hostPort: 8090
      resources: {}
      tty: true
      env:
        - name: MLFLOW_S3_ENDPOINT_URL
          value: http://127.0.0.1:9000
        - name: AWS_ACCESS_KEY_ID
          value: minioadmin
        - name: AWS_SECRET_ACCESS_KEY
          value: minioadmin
      command: ["mlflow"]
      args:
        - server
        - -p
        - 80
        - --host
        - 0.0.0.0
        - --backend-store-uri
        - sqlite:///root/store.db
        - --serve-artifacts
        - --artifacts-destination
        - s3://mlflow
        - --default-artifact-root
        - mlflow-artifacts:/
        # - http://127.0.0.1:80/api/2.0/mlflow-artifacts/artifacts/experiments
        - --gunicorn-opts
        - "--log-level debug"
      volumeMounts:
        - mountPath: /root
          name: mlflow-data
  volumes:
    - hostPath:
        path: ./minio
        type: Directory
      name: minio-data
    - hostPath:
        path: ./mlflow
        type: Directory
      name: mlflow-data
status: {}
I start this with podman play kube mlflowpod.yaml. On the same machine (or a different one, it doesn't matter), I have cloned and installed MLflow into a virtual environment. From that virtual environment, I set the environment variable MLFLOW_TRACKING_URI to <name-of-server>:8090. I then run the example.py file in the mlflow_artifacts example directory. I get the following response:
....
botocore.exceptions.NoCredentialsError: Unable to locate credentials
This seems to mean the client needs the server's MinIO credentials, which I thought the proxy was supposed to take care of.
If I also provide the env variables
$env:MLFLOW_S3_ENDPOINT_URL="http://<name-of-server>:9000/"
$env:AWS_ACCESS_KEY_ID="minioadmin"
$env:AWS_SECRET_ACCESS_KEY="minioadmin"
Then things work. But that rather defeats the purpose of the proxy...
What is it about the proxy setup via the Kubernetes play YAML and podman that is going wrong?
Just in case anyone stumbles upon this: I had the same issue based on your description. However, the problem on my side was that I tried to test this with a preexisting experiment (the default one) instead of creating a new one, so the old artifact location carried over, resulting in MLflow trying to use S3 with credentials rather than going through the HTTP proxy.
Hope this helps at least some of you out there.

Bad Gateway with Traefik and Docker Compose

I'm trying to deploy a React + FastAPI + Postgres application with Docker Compose, using Traefik as the reverse proxy, and I'm running into Bad Gateway errors. Running my FastAPI app locally serves it on port 8888 and exposes the path /docs for the API documentation. I'd like to eventually have the application running on example.local with the docs available on example.local/api/docs. My docker-compose.yaml is as follows (loosely based on this one):
version: '3.8'
services:
  proxy:
    image: traefik:v2.4
    networks:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - '80:80'
      - '8080:8080'
      - '443:443'
    command:
      - --providers.docker
      - --api.insecure=true
      - --providers.docker.exposedbydefault=false
      - --providers.docker.network=web
      - --entrypoints.web.address=:80
    labels:
      - traefik.enable=true
      - traefik.http.routers.example-proxy-http.rule=Host(`example.local`)
      - traefik.http.routers.example-proxy-http.entrypoints=web
      - traefik.http.services.example-proxy.loadbalancer.server.port=80
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    command: python app/main.py
    volumes:
      - ./backend/app:/app
    env_file:
      - .env
    networks:
      - web
      - backend
    labels:
      - traefik.enable=true
      - traefik.http.routers.example-backend-http.rule=PathPrefix(`api/docs`)
      - traefik.http.routers.example-backend-http.entrypoints=web
      - traefik.http.services.example-backend.loadbalancer.server.port=8888
networks:
  web:
    external: true
  backend:
    external: false
I've added 127.0.0.1 example.local to my /etc/hosts file.
From reading around, it seems Bad Gateway errors tend to occur when Traefik and the related services are not on the same network, or when Traefik routes traffic to the wrong port on the service container. However, if I set ports: - '8888:8888' on my backend service I can access the docs at localhost:8888/docs, so I'm fairly sure 8888 is the correct port for the backend load balancer. From what I can see, Traefik and the backend service are on the same network, and I've set it as the default Traefik network with --providers.docker.network=web. Interestingly, if I visit localhost/api/docs in my browser I am served a page from FastAPI. So it could be an issue with my Traefik HTTP router labels? I'm quite new to Traefik and proxies, so I would appreciate any help or guidance, thanks!
UPDATE
If I specify the host for the backend by adding
- traefik.http.routers.infilmation-backend-http.rule=Host(`example.local`) && PathPrefix(`/docs`)
to the backend service labels, then visiting example.local/docs does serve up a page from FastAPI. So I guess my question is: what is the best way of setting up a host for this application? Is there a way to specify a default host for all services so that any PathPrefix rules are relative to that host?
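One common pattern, sketched below using the example.local host and the /api prefix already used in this question (the middleware name is arbitrary), is to repeat the Host() matcher on each router and pair the path rule with a stripprefix middleware, so that example.local/api/docs is forwarded to the container as /docs:

```
  backend:
    labels:
      - traefik.enable=true
      - traefik.http.routers.example-backend-http.rule=Host(`example.local`) && PathPrefix(`/api`)
      - traefik.http.routers.example-backend-http.entrypoints=web
      # strip /api before forwarding, so FastAPI still serves /docs, /openapi.json, etc.
      - traefik.http.middlewares.example-backend-strip.stripprefix.prefixes=/api
      - traefik.http.routers.example-backend-http.middlewares=example-backend-strip
      - traefik.http.services.example-backend.loadbalancer.server.port=8888
```

This keeps the host explicit on each router rather than relying on a global default host, which label-based configuration does not really offer (the closest thing is the Docker provider's defaultRule, which only applies to routers that define no rule at all).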

Debugging Traefik when the Site Cannot Be Reached from outside Company's Intranet

Using docker-compose I have deployed a web application that uses Traefik as the reverse proxy, listening on port 80. This works without problem when I'm inside my company's intranet. Outside of the intranet, however, I get a 'site cannot be reached' response. Pinging the address from outside shows that the address is reachable and port 80 is open.
I've also tried to use segments in my Traefik configuration to route both the internal and external hostnames I have been provided, but this has no effect:
version: "3.5"
services:
test:
image: emilevauge/whoami
deploy:
labels:
traefik.enable: "true"
traefik.foo.frontend.rule: "Host:${HOSTNAME};PathPrefixStrip:/test"
traefik.bar.frontend.rule: "Host:${EXTERNAL_HOSTNAME};PathPrefixStrip:/test"
traefik.port: 80
networks:
- frontend
...
I have configured the access logs to see whether my requests are reaching Traefik. Can anyone advise me on what I should be looking for, and how to filter the huge amount of text produced to find it? This is my Traefik setup configuration:
version: '3.5'
services:
  traefik:
    image: traefik:alpine
    command: |-
      --entryPoints="Name:http Address::80"
      --entryPoints="Name:https Address::443 TLS"
      --defaultentrypoints="http,https"
      --acme
      --acme.acmelogging="true"
      --acme.domains="${HOSTNAME}"
      --acme.domains="${EXTERNAL_HOSTNAME}"
      --acme.email="${ACME_EMAIL}"
      --acme.entrypoint="https"
      --acme.httpchallenge
      --acme.httpchallenge.entrypoint="http"
      --acme.storage="/opt/traefik/acme.json"
      --acme.onhostrule="true"
      --docker
      --docker.swarmmode
      --docker.domain="${HOSTNAME}"
      --docker.network="frontend"
      --docker.watch
      --api
      --api.statistics
      --logLevel="DEBUG"
    networks:
      - frontend
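On the access-log question: one approach, sketched here under the assumption that these Traefik 1.x flags match the version in use (the file path is arbitrary), is to write the access log to its own file so it is not mixed into the DEBUG-level application log, and then search that file for the external hostname or your client IP:

```
    command: |-
      # ...existing flags from above, plus a dedicated access log
      --accessLog
      --accessLog.filePath="/opt/traefik/access.log"
      --accessLog.format="json"
```

Each request should then appear as one JSON line containing the request host, path, client address, and the backend it was routed to, so the absence of any line mentioning ${EXTERNAL_HOSTNAME} would indicate that external requests are not reaching Traefik at all.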

docker-compose v3 services on several networks

I use a docker-compose v3 file to deploy services on a Docker swarm mode cluster.
My services are Elasticsearch and Kibana. I want Kibana to be accessible from outside, and Elasticsearch to be reachable by Kibana but not visible or accessible from outside. To get this behaviour I created two overlay networks called 'external' and 'elk_only'. I put Elasticsearch on the 'elk_only' network and placed Kibana on both the 'elk_only' and 'external' networks. But things do not work: when I go to localhost:5601 (Kibana's port), I get the message 'localhost refused to connect'.
The command I use to deploy services is
docker stack deploy --compose-file=elastic-compose.yml elkstack
The content of elastic-compose.yml file:
version: "3"
services:
elasticsearch:
image: elasticsearch:5.1
expose:
- 9200
networks:
- elk_only
deploy:
restart_policy:
condition: on-failure
kibana:
image: kibana:5.1
ports:
- 5601:5601
volumes:
- ./kibana/kibana.yml:/etc/kibana/kibana.yml
depends_on:
- elasticsearch
networks:
- external
- elk_only
deploy:
restart_policy:
condition: on-failure
networks:
elk_only:
driver: overlay
external:
driver: overlay
The content of kibana.yml is
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://elkstack_elasticsearch:9200"
Could you help me to solve this problem and understand what's going wrong? Any help would be appreciated!