Let external container know about subnets created by multiple docker-compose projects - docker-compose

I have a multi-client setup, each client having a similar application stack. To simplify it a bit, let's say a typical tomcat web app working on a db.
Clients must be completely isolated from each other, each with their own application stack, including a standalone tomcat.
VirtualHost configurations on httpd map domains to the corresponding tomcat instance via AJP.
For example:
client1.example.com -> 172.18.0.2:8009
client2.example.com -> 172.19.0.2:8009
I found docker-compose very useful within each client. One of the things I like is that it even takes care of creating a private subnet for each client, and containers within that client don't even need to know the other containers' IP addresses, but can use aliases that are automatically set up by the links configuration (a sketch of one client's compose file follows the directory layout below). Pretty neat.
httpd
clients
    client1
        docker-compose.yml
        tomcat
        db
    client2
        docker-compose.yml
        tomcat
        db
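For reference, each client's compose file is roughly shaped like this (a sketch only; the images and environment values are placeholders, while the service names match the layout above):
version: '2'
services:
  tomcat:
    image: tomcat:8              # placeholder; the real application image goes here
    links:
      - db                       # tomcat reaches the database via the "db" alias
  db:
    image: mysql:5.7             # placeholder database image
    environment:
      MYSQL_ROOT_PASSWORD: example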
Now the problem: httpd (which I would also like to dockerize) needs to reach tomcat for all clients. However, it doesn't know their IPs.
How can I realize this mapping in the VirtualHost configuration? I.e., how do I fill in the question marks below?
<VirtualHost *:443>
    ServerName client1.example.com
    ..
    ProxyPass / ajp://?????:8009/
    ..
</VirtualHost>
One thing I have tried is to have some global env file, with all subnets and IPs explicit, like:
CLIENT1_TOMCAT=172.18.0.2
CLIENT2_TOMCAT=172.19.0.2
and use it in both the clients' yml files and httpd's yml.
That's pretty ugly though, because custom IP configuration within a yml is either for all services or for none. I was hoping I could set the IP for tomcat only and leave the others automatic, but then they end up overlapping. All of a sudden, specifying IP configurations for all services and all clients makes the whole thing a lot less elegant.
I also know it is possible to inspect a container's IP like this:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' client1_tomcat
However, this requires the container to already exist. Also, if I use this method I have to modify httpd's configuration every time the container is recreated, as its IP may change.
I guess that somehow I would like httpd's yml to "include" the clients' ones, so that it can know about their subnets, while still keeping the clients' subnets separate.
Any idea how I should approach this?
UPDATE
working setup based on #tcnj's answer:
Content of httpd/docker-compose.yml:
version: '2'
services:
  httpd:
    image: httpd
    ports:
      - "80:80"
    ..
    networks:
      - client1_default
      - client2_default
    external_links:
      - client1_tomcat_1
      - client2_tomcat_1
networks:
  client1_default:
    external: true
  client2_default:
    external: true
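Note that the external networks have to exist before httpd can start, so the client projects are brought up first, for example:
cd clients/client1 && docker-compose up -d
cd ../client2 && docker-compose up -d
cd ../../httpd && docker-compose up -d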

The easiest way to solve your problem is to put httpd on all of the client networks, and then refer to the tomcat containers by their name.
Eg:
You have client1 with network client1_default, and with the container client1_tomcat.
You also have client2 with network client2_default, and with the container client2_tomcat.
If you're starting httpd from the command line then you can add --network client1_default --network client2_default as extra options to docker run before the image name. If you're using docker compose for httpd as well then you can use the following:
httpd:
  ...
  networks:
    - client1_default
    - client2_default
  ...
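If you start httpd with docker run instead, note that older Docker versions only accept a single --network at creation time; the remaining client networks can be attached afterwards, roughly like this (the container name is illustrative):
docker run -d --name httpd --network client1_default httpd
docker network connect client2_default httpd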
In your httpd config you can then have:
<VirtualHost *:443>
    ServerName client1.example.com
    ...
    ProxyPass / ajp://client1_tomcat:8009/
    ...
</VirtualHost>
<VirtualHost *:443>
    ServerName client2.example.com
    ...
    ProxyPass / ajp://client2_tomcat:8009/
    ...
</VirtualHost>

Related

Does uWSGI need to create an http router when I want to use nginx as my server?

The uWSGI options of a Docker image I run include the following: --http :5000.
From the uWSGI docs I understand it is there to:
add an http router/server on the specified address
I want to use nginx as described in this answer and use the uwsgi_pass setting. Do I still need to keep the --http :5000 option of uWSGI, or would that create two servers serving the same web app?
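For context, a typical nginx + uwsgi_pass pairing looks roughly like this, with uWSGI listening on a uwsgi-protocol socket (--socket) instead of the --http router; the address below is illustrative and not taken from the linked answer:
server {
    listen 80;
    location / {
        include uwsgi_params;
        # speaks the uwsgi protocol to the app server, started with
        # something like: uwsgi --socket :5000 --module app
        uwsgi_pass 127.0.0.1:5000;
    }
}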

Docker: run multiple container on same tcp ports with different hostname

Is there a way to run multiple docker containers on the same ports? For example, I have used the ports 80/443 (HTTP), 3306 (TCP/MySQL) and 22 (TCP/SSH) in my docker-compose file. Now I want to run this docker-compose setup for different hostnames on the same IP address on my machine.
- traffic from example1.com (default public ip) => container1
- traffic from example2.com (default public ip) => container2
I have already found a solution only for the HTTP traffic by using an additional nginx/haproxy as a proxy on my machine. But unfortunately, this can't handle other TCP ports.
This isn't possible in the general (non-HTTP) case.
At a lower level, if I connect to 10.20.30.40:3306, the Linux kernel selects a single process that's listening on that port and sends the request there. You're not allowed to bind(2) a second process to the same port. (This is also why you get an error if you try to docker run -p picking a host port that's already in use.)
In the case of HTTP, there's the further detail that the host-name part of the URL is also sent in an HTTP Host: header: the Web browser both does a DNS lookup for e.g. stackoverflow.com and connects to its IP address, and also sends a Host: stackoverflow.com HTTP header. That's the specific mechanism that lets you run a proxy on port 80, and then forward to some other backend service via a virtual-host setup.
That mechanism is very specific to HTTP, though, and doesn't work for other protocols that don't have support for it. I don't think either MySQL or ssh have similar mechanisms in their wire protocol.
(In the particular situation you describe this is probably relatively easy to handle. You wouldn't want to make either your internal database or an sshd visible publicly, so delete their ports: from your docker-compose.yml file, and then just worry about proxying the HTTP service. It's a pretty unusual and complex setup to run sshd in Docker, so you might also remove that and simplify your stack a little.)
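A minimal sketch of that advice (service and image names are illustrative): only the reverse proxy publishes host ports, while the application and database are reached over the Compose network only.
version: '3'
services:
  proxy:
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
  app1:
    image: example/app1      # hypothetical application image, proxied by nginx
  db1:
    image: mysql:5.7
    # no ports: entry - reachable as db1:3306 from app1, but not from the host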

nginx reverse proxy a REST service alternate 200 and 404 responses for same uri

the nginx.conf:
server {
    listen 8080;
}
server {
    listen 80;
    server_name localhost;

    location / {
        root /test/public;
        index index.html index.htm;
    }

    location /api {
        proxy_pass http://localhost:8080;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}
The request and response headers are almost plain; no auth/session/cache parameters are involved.
For the same URI, the first request returns successfully, while the second returns 404, and so on.
I've tried disabling proxy buffering, but it has no effect.
I'm 99.99% sure you have IPv6 enabled. In that case localhost resolves to two IP addresses, 127.0.0.1 and [::1], and nginx balances requests between them.
http://nginx.org/r/proxy_pass:
If a domain name resolves to several addresses, all of them will be used in a round-robin fashion.
On the other hand, you have the listen 8080; directive, which tends to listen only on IPv4 addresses (depending on the OS, nginx version and other environment details).
You could solve your problem in several ways:
- use an explicit IPv4 address: proxy_pass http://127.0.0.1:8080;
- use an explicit IPv4 and IPv6 listen: listen [::]:8080 ipv6only=off;
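Applied to the config above, either change looks like this (pick one):
# Option 1: pin the upstream to an IPv4 address
location /api {
    proxy_pass http://127.0.0.1:8080;
}
# Option 2: make the backend listen on both IPv4 and IPv6
server {
    listen [::]:8080 ipv6only=off;
}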
I observed the same problem in a Docker environment, but the reason was independent of nginx: I had just made a stupid copy-paste mistake.
The setup:
I deployed the Docker containers via several docker-compose files, so I have the following structure:
API gateway container based on nginx, which references
Webserver 1 based on nginx and
Webserver 2 based on nginx
Each of them has its own Dockerfile and docker-compose file. Because the structure of the compose files for Webserver1 and Webserver2 is very similar, I copied it and replaced the container name and some other stuff. So far so good. Starting and stopping the containers was no problem, and watching them with docker container ls showed nothing abnormal. Accessing Webserver1 and Webserver2 via http://localhost:<Portnumber for server> was no problem, but accessing Webserver1 through the API gateway led to alternating 200 and 404 responses, while Webserver2 worked well.
After days of debugging I found the problem: as I mentioned, I copied the docker-compose file from Webserver1 for Webserver2, and while I replaced the container name, I forgot to replace the service name. My docker-compose file starts like this:
version: '3'
services:
  webserver1:
    image: 'nginx:latest'
    container_name: webserver2
    ...
This constellation also leads to the described behavior (presumably because both containers end up answering to the same service name on the shared network, so the gateway alternates between them).
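For completeness, the fix is simply to give the second project its own service name, e.g.:
version: '3'
services:
  webserver2:
    image: 'nginx:latest'
    container_name: webserver2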
Hope someone can save some days or hours by reading this post ;-)
André
Well, in my case the problem was pretty straightforward. What was happening was that I had about 15 server blocks, and the port I had set up for my nodejs proxy_pass was already being used by some old server block hiding in my enabled-servers directory. So nginx was randomly proxy-passing to either the old server, which was not running, or the one I had just started.
So I just grepped for the port number in the directory and found 2 instances. Changed my port number and the problem was fixed.
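For reference, something along these lines finds the conflicting blocks (the path assumes the common sites-enabled layout and the port number is illustrative; adjust both to your setup):
grep -rn '8080' /etc/nginx/sites-enabled/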

HTTP/2 nginx force redirect HTTP to HTTPS breaks expected behaviour

I've setup an example project that uses the latest version of nginx which supports HTTP/2.
I was going off this official blog post: https://www.nginx.com/blog/nginx-1-9-5/
Here is a working code example (with details of how to set everything up within the README - nginx.conf pasted below as well): https://github.com/Integralist/Docker-Examples/tree/master/Nginx-HTTP2
user nobody nogroup;
worker_processes auto;

events {
    worker_connections 512;
}

http {
    upstream app {
        server app:4567;
    }

    server {
        listen *:80 http2;

        location /app/ {
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen *:443 ssl http2;
        server_name integralist.com;

        ssl_certificate /etc/nginx/certs/server.crt;
        ssl_certificate_key /etc/nginx/certs/server.key;
        ssl_trusted_certificate /etc/nginx/certs/ca.crt;

        location /app/ {
            proxy_pass http://app/;
        }
    }
}
Although the example works, I've hit an issue whereby if I go to the service endpoint in my browser using HTTP, it first downloads a file called download and then redirects correctly to HTTPS.
I'm not sure what this file is or why the redirection causes it to happen, but its content is ġˇˇˇˇˇˇ ?
If I try using curl (e.g. curl --insecure http://$dev_ip:$dev_80/app/foo) the redirect fails to happen and I think it's because of this weird downloaded file? The response from curl to stdout is just ??????
I wonder if this is possibly a side-effect of using Docker to containerize the Ruby application and the nginx server?
Update
I removed http2 from listen *:80 http2; so it now reads listen *:80; and the download doesn't happen, but the problem I have is trying to get the redirect to point to the correct docker port now :-/
To clarify, I have an nginx container with dynamic port mapping (-P): one port for accessing the containerized nginx service on :80 and one for :443. My nginx.conf redirects traffic from HTTP to HTTPS, but I need to be able to identify the 443 port number.
e.g. docker ps shows 0.0.0.0:32791->80/tcp, 0.0.0.0:32790->443/tcp
I'm trying to redirect http://example.com:32791/app/foo to https://example.com:32790/app/foo
I'm not sure how I could configure nginx to know the relevant 443 Docker port number as that's dynamically generated?
I could be explicit and use -p when running the nginx container. But I'd still need to pass that port number into nginx as a variable somehow (a quick google would suggest using Docker's -e "my_port=9999" and then access it using nginx's env declaration)
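One hedged way to get such a variable into the config (not something tried in the original post) is to render a template at container start with envsubst; the variable, file names and port below are illustrative:
#!/bin/sh
# entrypoint.sh: substitute the externally mapped HTTPS port into the config,
# leaving nginx's own variables (e.g. $host) untouched, then start nginx.
envsubst '$MY_HTTPS_PORT' < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf
exec nginx -g 'daemon off;'
The template would then contain return 301 https://$host:${MY_HTTPS_PORT}$request_uri; and the container would be started with -e MY_HTTPS_PORT=32790.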
Update 2
I've even tried swapping to explicit ports -p and then hardcoding the port number into the nginx.conf with no luck (e.g. docker run ... -p 60443:443 -p 60080:80)...
listen *:80;

location /app/ {
    return 301 https://$host:60443$request_uri;
}
...if I hit http://example.com:60080/app/ it gets redirected to https://example.com:60443/, so almost there, but the actual path /app/ isn't appended when redirecting?
the problem I have is trying to get the redirect to point to the correct docker port now
If your nginx docker container is linked (docker run --link) with the ruby one, the correct port is the one EXPOSEd in the ruby container image (even if you don't know which host port that EXPOSEd port was mapped to).
A direct container-to-container link can use the port mentioned in the Dockerfile.
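In other words (image names are illustrative), something like:
docker run -d --name app my-ruby-app          # its Dockerfile EXPOSEs 4567
docker run -d -P --link app:app my-nginx      # nginx.conf can keep proxying to app:4567
The nginx container keeps talking to app:4567 over the link, regardless of which host ports -P happened to assign.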

mod_perl and multiple virtual hosts

We have this situation:
- Apache running mod_perl
- Multiple virtual hosts with own directories
- Each virtual host uses the same names for its Perl modules (these are development hosts; the modules differ a little, but have the same names)
- Apache2::Reload for each virtual host to reload modules on change
But Apache throws a 500 error on roughly every third request for the page, with no specific error in the log, only warnings about "redefined functions".
Maybe there are some requirements for running the same module names from different paths and keeping them distinct?
Here is how it's done:
NameVirtualHost 192.168.0.140
<VirtualHost 192.168.0.140>
    PerlOptions +Parent
    PerlSwitches -Mlib=/path/to/application
    DocumentRoot /path/to/application
    ServerName name.domain.com
</VirtualHost>
No, you cannot "run the same module name but with different paths". Perl just does not work that way. If you want to have multiple environments, keep them separate. You can run many Apache instances with different configurations (see the -f configfilename option) on various ports. Then in each vhost in the main server, reverse proxy to the back-end server on the corresponding port.
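A rough sketch of that layout (paths and ports are illustrative): one mod_perl backend per code base, each started with its own configuration file, and the main server proxying per vhost.
# one backend instance per environment, each with its own config and Listen port
httpd -f /etc/httpd/dev1.conf    # -Mlib=/path/to/application1, Listen 127.0.0.1:8081
httpd -f /etc/httpd/dev2.conf    # -Mlib=/path/to/application2, Listen 127.0.0.1:8082
Then, in the main server:
<VirtualHost 192.168.0.140>
    ServerName name.domain.com
    ProxyPass / http://127.0.0.1:8081/
    ProxyPassReverse / http://127.0.0.1:8081/
</VirtualHost>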