My goal is to get multiple PHP services running. So that each service can use the same framework, I would copy the code from Framework into each service (1 & 2).
tree
├── Framework
│   └── frw.class.php
├── CodeService1
│   └── index.php (requires frw.class.php)
├── CodeService2
│   └── index.php (requires frw.class.php)
├── docker-compose.yml
└── nginx
    └── conf
        └── myapp.conf
version: '2'
services:
  phpfpm:
    image: 'bitnami/php-fpm:8.0.2'
    container_name: project1
    networks:
      - app-tier
    volumes:
      - ./Framework:/app
      - ./CodeService1:/app/service1
  service2:
    image: 'bitnami/php-fpm:8.0.2'
    container_name: Service1
    networks:
      - app-tier
    volumes:
      - ./Framework:/app
      - ./CodeService2:/app/service2
  nginx:
    image: 'bitnami/nginx:latest'
    depends_on:
      - phpfpm
      - service2
    networks:
      - app-tier
    ports:
      - '80:8080'
      - '443:8443'
    volumes:
      - ./nginx/conf/myapp.conf:/opt/bitnami/nginx/conf/server_blocks/myapp.conf
networks:
  app-tier:
    driver: bridge
Currently the index files look like this:

CodeService1/index.php

<?php declare(strict_types = 1);
echo ("Service1</br>");

CodeService2/index.php

<?php declare(strict_types = 1);
echo ("Service2</br>");
But this won't work. I also tried to outsource the part that creates the services (image and file copies) into separate Dockerfiles, but that won't run either.
I call localhost/service1 or localhost/service2.
Thanks a lot.
Most probable is that in your nginx vhost you set the upstream to phpfpm:

set $upstream phpfpm;

and that's why only CodeService1 is resolved.
You can set the upstream conditionally, e.g.:

# default to codeservice1
set $upstream phpfpm:9000;

# if the URL is for service2, resolve from service2
if ($request_uri ~ "(^/service2)") {
    set $upstream service2:9000;
}

fastcgi_pass $upstream;
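For context, here is a minimal sketch of a complete server block around that snippet, assuming the document root /app and the service paths from the compose file above; adjust names and paths to your layout:

server {
    listen 0.0.0.0:8080;
    server_name localhost;
    root /app;
    index index.php;

    # default upstream: the phpfpm container
    set $upstream phpfpm:9000;

    # requests under /service2 go to the service2 container instead
    if ($request_uri ~ "(^/service2)") {
        set $upstream service2:9000;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass $upstream;
    }
}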
Related
What I have: a VPS with its IPv4 address and a valid domain name bound to it with an A record in my provider's DNS control panel.
Let's call my domain name mydomain.com; my IPv4 address is denoted IPADRESS for debugging purposes.
What I want: a Nextcloud instance and a Django-based blog running in parallel on my VPS, reachable respectively at cloud.mydomain.com for the Nextcloud instance and blog.mydomain.com for the Django-based blog through HTTPS.
What I've done:
I've tried to use nginx-proxy + its letsencrypt companion with Docker.
First of all, my working directory here is /home/ubuntu/.
Here is the tree /home/ubuntu/ -L 2 output:
.
├── mywebsite-django
│ └── mysite
│ ├── Dockerfile
│ ├── blog
│ ├── config
│ ├── db.sqlite3
│ ├── docker-compose.yml
│ ├── manage.py
│ ├── mywebsite
│ ├── nginx
│ ├── requirements.txt
│ └── staticfiles
├── nextcloud_setup
│ ├── app
│ │ ├── config
│ │ ├── custom_apps
│ │ ├── data
│ │ └── themes
│ ├── docker-compose.yml
│ └── proxy
│ ├── certs
│ ├── conf.d
│ ├── html
│ └── vhost.d
└── nginx_setup
├── certs
│ ├── mydomain.com
│ ├── blog.mydomain.com
│ ├── default.crt
│ ├── default.key
│ └── dhparam.pem
├── conf.d
│ └── default.conf
├── docker-compose.yml
├── html
├── nginx.tmpl
├── templates
│ └── nginx.tmpl
└── vhost.d
└── default
26 directories, 14 files
Then I create a Docker network:
So I run sudo docker network create nginx-proxy.
Then I run my nginx-proxy + letsencrypt containers:
cd nginx_setup + sudo docker-compose up -d
where nginx_setup/docker-compose.yml is:
version: '3'
services:
  nginx:
    image: nginx
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    container_name: nginx
    restart: unless-stopped
    logging:
      options:
        max-size: "10m"
        max-file: "3"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /home/ubuntu/nginx_setup/conf.d:/etc/nginx/conf.d
      - /home/ubuntu/nginx_setup/vhost.d:/etc/nginx/vhost.d
      - /home/ubuntu/nginx_setup/html:/usr/share/nginx/html
      - /home/ubuntu/nginx_setup/certs:/etc/nginx/certs:ro
    environment:
      DEFAULT_HOST: "mydomain.com"
  nginx-gen:
    image: jwilder/docker-gen
    container_name: nginx-gen
    restart: unless-stopped
    volumes:
      - /home/ubuntu/nginx_setup/conf.d:/etc/nginx/conf.d
      - /home/ubuntu/nginx_setup/vhost.d:/etc/nginx/vhost.d
      - /home/ubuntu/nginx_setup/html:/usr/share/nginx/html
      - /home/ubuntu/nginx_setup/certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:rw
      - /home/ubuntu/nginx_setup/templates/:/etc/docker-gen/templates:ro
    command: -notify-sighup nginx -watch -only-exposed /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
  nginx-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-letsencrypt
    restart: unless-stopped
    volumes:
      - /home/ubuntu/nginx_setup/conf.d:/etc/nginx/conf.d
      - /home/ubuntu/nginx_setup/vhost.d:/etc/nginx/vhost.d
      - /home/ubuntu/nginx_setup/html:/usr/share/nginx/html
      - /home/ubuntu/nginx_setup/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:rw
    environment:
      NGINX_DOCKER_GEN_CONTAINER: "nginx-gen"
      NGINX_PROXY_CONTAINER: "nginx"
networks:
  default:
    external:
      name: nginx-proxy
The nginx.tmpl is defined as follows:
server {
    listen 80 default_server;
    server_name _; # This is just an invalid value which will never trigger on a real hostname.
    error_log /proc/self/fd/2;
    access_log /proc/self/fd/1;
    return 503;
}

{{ range $host, $containers := groupByMulti $ "Env.VIRTUAL_HOST" "," }}
upstream {{ $host }} {
{{ range $index, $value := $containers }}
    {{ $addrLen := len $value.Addresses }}
    {{ $network := index $value.Networks 0 }}
    {{/* If only 1 port exposed, use that */}}
    {{ if eq $addrLen 1 }}
        {{ with $address := index $value.Addresses 0 }}
        # {{ $value.Name }}
        server {{ $network.IP }}:{{ $address.Port }};
        {{ end }}
    {{/* If more than one port exposed, use the one matching VIRTUAL_PORT env var */}}
    {{ else if $value.Env.VIRTUAL_PORT }}
        {{ range $i, $address := $value.Addresses }}
            {{ if eq $address.Port $value.Env.VIRTUAL_PORT }}
            # {{ $value.Name }}
            server {{ $network.IP }}:{{ $address.Port }};
            {{ end }}
        {{ end }}
    {{/* Else default to standard web port 80 */}}
    {{ else }}
        {{ range $i, $address := $value.Addresses }}
            {{ if eq $address.Port "80" }}
            # {{ $value.Name }}
            server {{ $network.IP }}:{{ $address.Port }};
            {{ end }}
        {{ end }}
    {{ end }}
{{ end }}
}

server {
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    server_name {{ $host }};
    proxy_buffering off;
    error_log /proc/self/fd/2;
    access_log /proc/self/fd/1;

    location / {
        proxy_pass http://{{ trim $host }};
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # HTTP 1.1 support
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
{{ end }}
Template taken from here.
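For the gunicorn container defined below (VIRTUAL_HOST=blog.mydomain.com, VIRTUAL_PORT=8000), this template should render an upstream block roughly like the following; the container IP here is purely illustrative:

upstream blog.mydomain.com {
    # myblog
    server 172.18.0.3:8000;
}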
Note: once running, when I run sudo docker-compose logs from /home/ubuntu/nginx_setup/, nothing appears to be wrong.
Then I run my Django container:
cd /home/ubuntu/mywebsite-django/mysite/ + sudo docker-compose up -d
My file /home/ubuntu/mywebsite-django/mysite/docker-compose.yml is defined by:
version: '3'
services:
  gunicorn:
    container_name: myblog
    build: .
    command: sh -c "python manage.py makemigrations &&
             python manage.py migrate &&
             python manage.py collectstatic --noinput &&
             gunicorn --bind 0.0.0.0:8000 --workers 2 mywebsite.wsgi:application"
    volumes:
      - ./staticfiles:/static
    environment:
      VIRTUAL_HOST: blog.mydomain.com
      VIRTUAL_PORT: 8000
      LETSENCRYPT_HOST: mydomain.com
      LETSENCRYPT_EMAIL: mymail@forletsecrypt.com
    ports:
      - "8000:8000"
networks:
  default:
    external:
      name: nginx-proxy
Note: once running, when I run sudo docker-compose logs from /home/ubuntu/mywebsite-django/mysite/, nothing appears to be wrong.
What I get:
curl blog.mydomain.com output:
<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body>
<center><h1>503 Service Temporarily Unavailable</h1></center>
<hr><center>nginx/1.23.2</center>
</body>
</html>
Note: I did not try to launch my Nextcloud instance since even my Django app does not work.
What's wrong here?
Here are some details on my machine:
sudo docker network ls output:
NETWORK ID NAME DRIVER SCOPE
ce90ed81eade bridge bridge local
c6325fd6c267 host host local
834d9a715380 nginx-proxy bridge local
78c28ce57f15 none null local
and sudo ufw status verbose output:
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip
To Action From
-- ------ ----
80,443/tcp (Nginx Full) ALLOW IN Anywhere
22/tcp ALLOW IN Anywhere
80,443/tcp (Nginx Full (v6)) ALLOW IN Anywhere (v6)
22/tcp (v6) ALLOW IN Anywhere (v6)
I'm a beginner in Docker and I tried to deploy my app with HTTPS powered by Traefik, following this article.
I did everything according to the instructions for my project, but I got this error:
ModuleNotFoundError: No module named 'app'
How can I solve this issue?
app structure:
└── testapp
├── app
│ ├── __init__.py
│ └── main.py
├── docker-compose.override.yml
├── docker-compose.traefik.yml
├── docker-compose.yml
├── Dockerfile
├── __init__.py
└── requirements.txt
Dockerfile:
FROM python:3.9
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
COPY ./app /code/app
docker-compose.yml:
services:
  backend:
    build: ./
    command: uvicorn app.main:app --host 0.0.0.0
    ports:
      # Listen on port 80, default for HTTP, necessary to redirect to HTTPS
      - 8000:8000
    restart: always
    labels:
      # Enable Traefik for this specific "backend" service
      - traefik.enable=true
      # Define the port inside of the Docker service to use
      - traefik.http.services.app.loadbalancer.server.port=80
      # Make Traefik use this domain in HTTP
      - traefik.http.routers.app-http.entrypoints=http
      - traefik.http.routers.app-http.rule=Host(`mydomain.com`)
      # Use the traefik-public network (declared below)
      - traefik.docker.network=traefik-public
      # Make Traefik use this domain in HTTPS
      - traefik.http.routers.app-https.entrypoints=https
      - traefik.http.routers.app-https.rule=Host(`mydomain.com`)
      - traefik.http.routers.app-https.tls=true
      # Use the "le" (Let's Encrypt) resolver
      - traefik.http.routers.app-https.tls.certresolver=le
      # https-redirect middleware to redirect HTTP to HTTPS
      - traefik.http.middlewares.https-redirect.redirectscheme.scheme=https
      - traefik.http.middlewares.https-redirect.redirectscheme.permanent=true
      # Middleware to redirect HTTP to HTTPS
      - traefik.http.routers.app-http.middlewares=https-redirect
      - traefik.http.routers.app-https.middlewares=admin-auth
    networks:
      # Use the public network created to be shared between Traefik and
      # any other service that needs to be publicly available with HTTPS
      - traefik-public
networks:
  traefik-public:
    external: true
docker-compose.traefik.yml:
services:
  traefik:
    # Use the latest v2.x Traefik image available
    image: traefik:v2.8
    ports:
      # Listen on port 80, default for HTTP, necessary to redirect to HTTPS
      - 80:80
      # Listen on port 443, default for HTTPS
      - 443:443
    restart: always
    labels:
      # Enable Traefik for this service, to make it available in the public network
      - traefik.enable=true
      # Define the port inside of the Docker service to use
      - traefik.http.services.traefik-dashboard.loadbalancer.server.port=8080
      # Make Traefik use this domain in HTTP
      - traefik.http.routers.traefik-dashboard-http.entrypoints=http
      - traefik.http.routers.traefik-dashboard-http.rule=Host(`traefic.mydomain.com`)
      # Use the traefik-public network (declared below)
      - traefik.docker.network=traefik-public
      # traefik-https the actual router using HTTPS
      - traefik.http.routers.traefik-dashboard-https.entrypoints=https
      - traefik.http.routers.traefik-dashboard-https.rule=Host(`traefic.mydomain.com`)
      - traefik.http.routers.traefik-dashboard-https.tls=true
      # Use the "le" (Let's Encrypt) resolver created below
      - traefik.http.routers.traefik-dashboard-https.tls.certresolver=le
      # Use the special Traefik service api@internal with the web UI/Dashboard
      - traefik.http.routers.traefik-dashboard-https.service=api@internal
      # https-redirect middleware to redirect HTTP to HTTPS
      - traefik.http.middlewares.https-redirect.redirectscheme.scheme=https
      - traefik.http.middlewares.https-redirect.redirectscheme.permanent=true
      # traefik-http set up only to use the middleware to redirect to https
      - traefik.http.routers.traefik-dashboard-http.middlewares=https-redirect
      # admin-auth middleware with HTTP Basic auth
      # Using the environment variables USERNAME and HASHED_PASSWORD
      - traefik.http.middlewares.admin-auth.basicauth.users=${USERNAME?Variable not set}:${HASHED_PASSWORD?Variable not set}
      # Enable HTTP Basic auth, using the middleware created above
      - traefik.http.routers.traefik-dashboard-https.middlewares=admin-auth
    volumes:
      # Add Docker as a mounted volume, so that Traefik can read the labels of other services
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # Mount the volume to store the certificates
      - traefik-public-certificates:/certificates
    command:
      # Enable Docker in Traefik, so that it reads labels from Docker services
      - --providers.docker
      # Do not expose all Docker services, only the ones explicitly exposed
      - --providers.docker.exposedbydefault=false
      # Create an entrypoint "http" listening on port 80
      - --entrypoints.http.address=:80
      # Create an entrypoint "https" listening on port 443
      - --entrypoints.https.address=:443
      # Create the certificate resolver "le" for Let's Encrypt, uses the environment variable EMAIL
      - --certificatesresolvers.le.acme.email=mymail@mail.com
      # Store the Let's Encrypt certificates in the mounted volume
      - --certificatesresolvers.le.acme.storage=/certificates/acme.json
      # Use the TLS Challenge for Let's Encrypt
      - --certificatesresolvers.le.acme.tlschallenge=true
      # Enable the access log, with HTTP requests
      - --accesslog
      # Enable the Traefik log, for configurations and errors
      - --log
      # Enable the Dashboard and API
      - --api
    networks:
      # Use the public network created to be shared between Traefik and
      # any other service that needs to be publicly available with HTTPS
      - traefik-public
volumes:
  # Create a volume to store the certificates, there is a constraint to make sure
  # Traefik is always deployed to the same Docker node with the same volume containing
  # the HTTPS certificates
  traefik-public-certificates:
networks:
  # Use the previously created public network "traefik-public", shared with other
  # services that need to be publicly available via this Traefik
  traefik-public:
    external: true
docker-compose.override.yml:
services:
  backend:
    ports:
      - 80:80
networks:
  traefik-public:
    external: false
I added the line command: uvicorn app.main:app --host 0.0.0.0 compared to the tutorial, because otherwise it gave the error Error response from daemon: No command specified.
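For reference, the "No command specified" error can also be avoided by giving the image a default command in the Dockerfile instead of a compose command: line. This is a sketch, not the tutorial's exact file; the port 80 here is an assumption matching the loadbalancer.server.port label above:

FROM python:3.9
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
COPY ./app /code/app
# A default command so the container can start without a compose "command:" override
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]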
I have a FastAPI app running in a Docker container. It works well except for one thing:
the app doesn't reload on changes. The changes are applied only if I restart the container. But I wonder why it doesn't reload the app when I put the --reload flag in the command.
I understand that Docker itself does not reload when code changes. But the app should, with the --reload flag in the command.
If I misunderstand, please advise how to achieve what I want. Thanks.
main.py

from typing import Optional

import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}

@app.get("/items/{item_id}")
def read_item(item_id: int, q: Optional[str] = None):
    return {"item_id": item_id, "q": q}

if __name__ == '__main__':
    uvicorn.run(app, host="0.0.0.0", port=8000, reload=True)
docker-compose.yml
version: "3"
services:
web:
build: .
restart: always
command: bash -c "uvicorn main:app --host 0.0.0.0 --port 8000 --reload"
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
db:
image: postgres
ports:
- "50009:5432"
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=test_db
This works for me:

version: "3.9"
services:
  people:
    container_name: people
    build: .
    working_dir: /code/app
    command: uvicorn main:app --host 0.0.0.0 --reload
    environment:
      DEBUG: 1
    volumes:
      - ./app:/code/app
    ports:
      - 8008:8000
    restart: on-failure
This is my directory structure:
.
├── Dockerfile
├── Makefile
├── app
│ └── main.py
├── docker-compose.yml
└── requirements.txt
Make sure working_dir and the volumes section's - ./app:/code/app match.
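For completeness, a Dockerfile consistent with that layout could look like this; a minimal sketch, assuming uvicorn is listed in requirements.txt (the answer doesn't show the original Dockerfile):

FROM python:3.9
# /code/app is both the compose working_dir and the bind-mount target
WORKDIR /code/app
COPY requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir -r /code/requirements.txt
# Baked-in copy for standalone builds; at runtime the ./app bind mount shadows it
COPY ./app /code/app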
example run:
docker-compose up --build
...
Attaching to people
people | INFO: Will watch for changes in these directories: ['/code/app']
Are you starting the container with docker compose up? This is working for me with hot reload at http://127.0.0.1.
version: "3.9"
services:
bff:
container_name: bff
build: .
working_dir: /code/app
command: uvicorn main:app --host 0.0.0.0 --port 8000 --reload
environment:
DEBUG: 1
volumes:
- .:/code
ports:
- "80:8000"
restart: on-failure
Also, I don't have your final two lines, if __name__ == etc., in my app. Not sure if that would change anything.
I found this solution that worked for me, in this answer.
From the watchfiles documentation, change detection relies on file system notifications, and I think that with Docker those events are not emitted for files inside a mounted volume:

Notify will fall back to file polling if it can't use file system notifications

So you have to tell watchfiles to force polling; that's what you did in your test Python script with the force_polling parameter, and that's why it works:
for changes in watch('/code', force_polling=True):
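For reference, a self-contained version of such a test script might look like the following; a minimal sketch, assuming the watchfiles package is installed and /code is the mounted source directory:

from watchfiles import watch

# force_polling polls file stats instead of relying on inotify events,
# which are often not delivered inside Docker bind mounts
for changes in watch('/code', force_polling=True):
    print(changes)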
Fortunately, the documentation also gives us the possibility to force polling via the environment variable WATCHFILES_FORCE_POLLING. Add this environment variable to your docker-compose.yml and auto-reload will work:
services:
  fastapi-dev:
    image: myimagename:${TAG:-latest}
    build:
      context: .
    volumes:
      - ./src:/code
      - ./static:/static
      - ./templates:/templates
    restart: on-failure
    ports:
      - "${HTTP_PORT:-8080}:80"
    environment:
      - WATCHFILES_FORCE_POLLING=true
I found out about appwrite.io and I really like the features Appwrite offers. It's similar to Firebase, but open source.
I'm trying to make Appwrite work with Python/FastAPI.
Below is the folder structure of the project. The api folder will contain all the additional logic. The Dockerfile is taken from the uvicorn-gunicorn-fastapi-docker repo.
└── project
    ├── docker-compose.yml
    └── app
        ├── api
        ├── Dockerfile
        ├── main.py
        └── requirements.txt
In the docker-compose file I added the app service which starts FastAPI.
version: '3'
services:
  app:
    build: ./app
    ports:
      - 3000:3000
    restart: unless-stopped
    volumes:
      - ./app:/usr/src/app
  traefik:
    image: traefik:v2.1.4
    command:
      - --providers.file.directory=/storage/config
      - --providers.file.watch=true
      - --providers.docker=true
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
    restart: unless-stopped
    ports:
      - 5000:80
      - 443:443
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - appwrite-config:/storage/config:ro
      - appwrite-certificates:/storage/certificates:ro
    depends_on:
      - appwrite
    networks:
      - gateway
      - appwrite
  appwrite:
    image: appwrite/appwrite:0.6.2
    restart: unless-stopped
    networks:
      - appwrite
    labels:
      - traefik.http.routers.appwrite.rule=PathPrefix(`/`)
      - traefik.http.routers.appwrite-secure.rule=PathPrefix(`/`)
      - traefik.http.routers.appwrite-secure.tls=true
    volumes:
      - appwrite-uploads:/storage/uploads:rw
      - appwrite-cache:/storage/cache:rw
      - appwrite-config:/storage/config:rw
      - appwrite-certificates:/storage/certificates:rw
    depends_on:
      - mariadb
      - redis
      - smtp
      - influxdb
      - telegraf
    environment:
      - _APP_ENV=production
      - _APP_OPENSSL_KEY_V1=your-secret-key
      - _APP_DOMAIN=localhost
      - _APP_DOMAIN_TARGET=localhost
      - _APP_REDIS_HOST=redis
      - _APP_REDIS_PORT=6379
      - _APP_DB_HOST=mariadb
      - _APP_DB_PORT=3306
      - _APP_DB_SCHEMA=appwrite
      - _APP_DB_USER=user
      - _APP_DB_PASS=password
      - _APP_INFLUXDB_HOST=influxdb
      - _APP_INFLUXDB_PORT=8086
      - _APP_STATSD_HOST=telegraf
      - _APP_STATSD_PORT=8125
      - _APP_SMTP_HOST=smtp
      - _APP_SMTP_PORT=25
  mariadb:
    image: appwrite/mariadb:1.0.3
    restart: unless-stopped
    networks:
      - appwrite
    volumes:
      - appwrite-mariadb:/var/lib/mysql:rw
    environment:
      - MYSQL_ROOT_PASSWORD=rootsecretpassword
      - MYSQL_DATABASE=appwrite
      - MYSQL_USER=user
      - MYSQL_PASSWORD=password
    command: 'mysqld --innodb-flush-method=fsync'
  smtp:
    image: appwrite/smtp:1.0.1
    restart: unless-stopped
    networks:
      - appwrite
    environment:
      - MAILNAME=appwrite
      - RELAY_NETWORKS=:192.168.0.0/24:10.0.0.0/16
  redis:
    image: redis:5.0
    restart: unless-stopped
    networks:
      - appwrite
    volumes:
      - appwrite-redis:/data:rw
  influxdb:
    image: influxdb:1.6
    restart: unless-stopped
    networks:
      - appwrite
    volumes:
      - appwrite-influxdb:/var/lib/influxdb:rw
  telegraf:
    image: appwrite/telegraf:1.0.0
    restart: unless-stopped
    networks:
      - appwrite
networks:
  gateway:
  appwrite:
volumes:
  appwrite-mariadb:
  appwrite-redis:
  appwrite-cache:
  appwrite-uploads:
  appwrite-certificates:
  appwrite-influxdb:
  appwrite-config:
I tried docker-compose links and networks to reach appwrite, but neither worked.
Below is the error I get when I try to use the Python Appwrite SDK:
{"result":"HTTPConnectionPool(host='localhost', port=5000): Max retries exceeded with url: /v1/database/collections (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f5517795d60>: Failed to establish a new connection: [Errno 111] Connection refused'))"}
Adding custom Docker containers to Appwrite
https://dev.to/streamlux/adding-custom-docker-containers-to-appwrite-2chp
You can also refer to:
Learn How to Run Appwrite With Your Own Custom Proxy or Load Balancer
https://dev.to/appwrite/learn-how-to-run-appwrite-with-your-own-custom-proxy-or-load-balancer-28k
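As the titles suggest, the likely fix is that localhost inside a container is not the host machine: the app service has to share a network with Appwrite and address it by service name. A hedged sketch against the compose file above (service and network names taken from it):

  app:
    build: ./app
    ports:
      - 3000:3000
    restart: unless-stopped
    volumes:
      - ./app:/usr/src/app
    networks:
      - appwrite   # join the same network as the appwrite service

The SDK endpoint would then be http://appwrite/v1 rather than http://localhost:5000/v1.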
I have a testApp.war which I'd like to deploy on Tomcat through Docker (Docker is on 10.0.2.157). My testApp will work properly only with a Postgres DB and the specified user testUser and password testUserPasswd. I built the following structure:
.
├── db
│ ├── Dockerfile
│ ├── pg_hba.conf
│ └── postgresql.conf
├── docker-compose.yml
└── web
├── context.xml
├── Dockerfile
├── software
│ └── testApp.war
└── tomcat-users.xml
The content of all these files is attached below. I start my containers with the command:
docker-compose up -d
However, when I go to Tomcat in a web browser (http://10.0.2.157:8282/manager/html) and try to start my testApp, I get:
HTTP Status 404 – Not Found
Type Status Report
Message /testApp/
Description The origin server did not find a current representation
for the target resource or is not willing to disclose that one exists.
Apache Tomcat/8.5.20
What am I doing wrong? Could you help me with this?
db/Dockerfile
FROM postgres:9.5
MAINTAINER riwaniak
ENV POSTGRES_USER testUser
ENV POSTGRES_PASSWORD testUserPasswd
ENV POSTGRES_DB testUser
ADD pg_hba.conf /etc/postgresql/9.5/main/
ADD postgresql.conf /etc/postgresql/9.5/main/
db/pg_hba.conf
local all all trust
host all all 127.0.0.1/32 md5
host all all 0.0.0.0/0 md5
host all
db/postgresql.conf
listen_addresses='*'
web/context.xml
<?xml version="1.0" encoding="UTF-8"?>
<Context antiResourceLocking="false" privileged="true" >
<!--
<Valve className="org.apache.catalina.valves.RemoteAddrValve"
allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1" />
-->
</Context>
web/Dockerfile
FROM tomcat:8.5.20-jre8
MAINTAINER riwaniak
COPY ./software /usr/local/tomcat/webapps/
CMD ["catalina.sh", "run"]
web/tomcat-users.xml
<?xml version="1.0" encoding="UTF-8"?>
<tomcat-users xmlns="http://tomcat.apache.org/xml"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://tomcat.apache.org/xml tomcat-users.xsd"
version="1.0">
<role rolename="tomcat"/>
<role rolename="admin-gui"/>
<role rolename="manager-gui"/>
<user username="tomcat" password="tomcat" roles="tomcat,admin-gui,manager-gui"/>
</tomcat-users>
and finally docker-compose.yml
version: '2'
services:
  testApp:
    build: ./web
    volumes:
      - /path/to/tomcat/folder/web/tomcat-users.xml:/usr/local/tomcat/conf/tomcat-users.xml
      - /path/to/tomcat/folder/web/context.xml:/usr/local/tomcat/webapps/HelpdeskApp/META-INF/context.xml
      - /path/to/tomcat/folder/web/context.xml:/usr/local/tomcat/webapps/host-manager/META-INF/context.xml
      - /path/to/tomcat/folder/web/context.xml:/usr/local/tomcat/webapps/manager/META-INF/context.xml
    ports:
      - "8282:8080"
    links:
      - testAppdb
    networks:
      - testAppnet
  testAppdb:
    build: ./db
    ports:
      - "5555:5432"
    volumes:
      - /srv/docker/postgresql:/var/lib/postgresql
      - /path/to/tomcat/folder/db/postgresql.conf:/etc/postgresql/9.5/main/postgresql.conf
      - /path/to/tomcat/folder/db/pg_hba.conf:/etc/postgresql/9.5/main/pg_hba.conf
    command: postgres -c config_file=/etc/postgresql/9.5/main/postgresql.conf
    networks:
      - testAppnet
networks:
  testAppnet:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/16
OK, I got the solution!
Thanks @Tarun Lalwani for the support and suggestions.
I had a wrong application.yml configuration in the Tomcat container. Docker mapped the IP addresses of my containers, but I shouldn't have written "10.0.2.157"; I should have used the container's name. So in my example I had something like this:

(...)
environments:
    development:
        dataSource:
            dbCreate: update
            url: jdbc:postgresql://10.0.2.157:5432/helpdesk_dev
(...)

However, the right solution was to use the name of the postgres container (testAppdb), so the correct configuration is:

(...)
environments:
    development:
        dataSource:
            dbCreate: update
            url: jdbc:postgresql://testAppdb:5432/test_dev
(...)
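A quick way to confirm this kind of name resolution is to look the database service up by name from inside the web container; a sketch using the service names from the docker-compose.yml above:

docker-compose exec testApp getent hosts testAppdb
# prints the container's address on the compose network, e.g. 172.28.0.x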