I am trying to quickly set up a database for some friends to connect to (security, efficiency, and durability are not huge concerns), but I cannot determine what is causing my connection attempts to time out. Pretty much this unanswered question.
PostgreSQL and PGAdmin are created via docker-compose on (let's say) 192.168.1.100.
Everything starts fine. I confirmed that listen_addresses = '*' in the Postgres conf. The firewall allows 5432 and 5050 (pgAdmin) from my local network, where my nginx server will pick the traffic up:
5050 ALLOW 192.168.1.0/24 # pgadmin
5432 ALLOW 192.168.1.0/24 # postgres
The nginx server redirects a subdomain to the original server's IP and port, like so:
server {
    listen 80;
    server_name pg.mydomain.net;

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl http2;
    server_name pg.mydomain.net;
    proxy_read_timeout 600s;

    location / {
        proxy_pass http://192.168.1.100:5432;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    ssl_certificate /etc/letsencrypt/live/mydomain.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.net/privkey.pem;
}
(this block is boilerplate I use for all my quick projects; it works 90% of the time)
Then on Cloudflare I add a CNAME entry for pg.mydomain.net (also one for pgadmin.mydomain.net, which works flawlessly).
But the connection string postgresql://myuser:mypw@pg.mydomain.net:5433/mydb doesn't work the way it does when I access the server by its local IP address directly. I'm thinking the problem lies with nginx. I'm hoping for a solution that allows my users to construct a similarly simple connection string in a Jupyter Notebook.
As was hinted at in the comments but not stated explicitly: requests to a Postgres server are not HTTP/HTTPS, so they are probably not able to go through nginx.
The reason I (and anyone) would want it to pass through a reverse proxy is to limit the surface area of my network that is exposed to the internet to a single server. That had worked for all kinds of web apps provided by docker containers (including PGAdmin), but the assumption does not hold for traffic headed to a database.
I say probably not able to go through nginx because it seems like this is what the nginx stream module is for. But since that required recompiling and reinstalling nginx just to test a stream{} block, I did not invest the effort.
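For reference, if the stream module is available, my understanding is that a TCP pass-through for Postgres would look roughly like the sketch below. This is untested on my setup; it assumes nginx was built with --with-stream, and the stream{} block sits at the top level of nginx.conf alongside http{}, not inside it:

# Untested sketch: requires nginx built with the stream module (--with-stream).
# This block goes at the top level of nginx.conf, next to http{}.
stream {
    server {
        listen 5432;                      # port nginx accepts Postgres connections on
        proxy_pass 192.168.1.100:5432;    # forward raw TCP to the Postgres host
        proxy_connect_timeout 10s;
    }
}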
What ultimately worked is:
punching a hole in my router for the Postgres port and forwarding that traffic directly to the Postgres server (bypassing nginx)
setting the DNS CNAME record for the subdomain to be unproxied, which means it no longer benefits from Cloudflare hiding my true IP address, but at least traffic is forwarded correctly.
In the end, my users can connect via any database tool using hostname pg.mydomain.net, port 5432, and their username + password.
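So from a Jupyter notebook the connection string keeps the same simple shape as before, just pointed at the subdomain (the credentials shown are placeholders from the question):

postgresql://myuser:mypw@pg.mydomain.net:5432/mydb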
Related
We are developing a multi-user VR system using Unity and have run into the following problem:
Configuration
The configuration is as follows:
Website – To be published on our server using IIS for Viewers to watch Presenters using their browsers.
VR Application – Run remotely by Presenters on dedicated laptops. Presenters use VR Headsets to explore an environment and share it with Viewers. Presenters connect to the Host Application and establish a websocket connection using port 80.
Host Application – To run on our server for relaying websocket communication between all VR Presenters and browser Viewers.
Problem
When we run the Host Application on our server, the networking code fails to listen on port 80 for traffic. The error is: "An attempt was made to access a socket in a way forbidden by its access permissions". It appears that IIS, or perhaps the system, has taken full control of port 80 and is preventing another process from using it.
Constraints
Our client's network security constraints prevent us from introducing a second server to run the Host Application (they can only connect to a defined list of IP addresses), and they cannot open new ports for us. So we have a single server (Windows Server 2016) and port 80 to play with!
Is it remotely possible to publish websites using IIS (HTTP port 80) AND run an application which listens and communicates websocket traffic also on port 80?
Development Information
During development we set up a second server, and when we located the Host Application on it (everything still on port 80) everything worked. This development server wasn't running IIS, just the Host Application. For testing we could use our own computers and didn't have the IP constraints, so we could point the browsers and Presenter laptops to the development server for websocket traffic.
Summary
Any thoughts, advice, or guidance would be very welcome, as currently we don't know which way to turn!
My suggestion
Use two other ports for the published website and the websocket server (e.g. 90 and 91), and use nginx (or Apache?) on port 80 to route traffic to them according to the protocol. You can use a config like:
# code in nginx
location / {  # http
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-Nginx-Proxy true;
    proxy_set_header Connection "";
    proxy_pass http://127.0.0.1:90;  # real port of the website
    proxy_redirect default;
    #root html;
    #index index.html index.htm;
}

location /ws {  # websocket
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-Ip $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Nginx-Proxy true;
    proxy_redirect off;
    client_max_body_size 10m;
    proxy_pass http://127.0.0.1:91;  # real port of the websocket server
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_connect_timeout 300s;
    proxy_read_timeout 300s;
    proxy_send_timeout 300s;
}
Details are here (PS: it is in Chinese); also take care with port 443 (https and wss).
Use another port.
In fact, the same port can handle multiple different protocols, and websocket is an upgrade based on the HTTP protocol. But this is not recommended, because it may cause some problems.
A port can only be listened on by one process, and port 80 in IIS is held by w3svc. The server must use some method to detect the protocol sent by the client; the more protocols that may be used on the port, the higher the overhead.
A server can't support multiple protocols where the server speaks first, because there is no way to detect the protocol of the client. You can support a single server-first protocol with multiple client-first protocols (by adding a short delay after accept to see if the client will send data), but that's a bit wonky.
I hope this helps others who also want to run an application on their server but find it cannot listen on port 80/443 because port 80/443 has been reserved for IIS. I finally solved the problem as follows:
I created a new Domain and pointed it to my webserver. I then created a new Website in IIS bound to my new Domain. I pointed the Website to an empty folder. I then downloaded and installed Microsoft's URL Rewrite extension for IIS along with the Application Request Routing extension:
URL Rewrite
Application Request Routing
After these were installed I could select my new Website in IIS and, in the IIS section to the right, double-click the new URL Rewrite option. An Add Rule dialog appears where I could select a Reverse Proxy option and create a simple rule to redirect traffic to localhost:80 and back again for outbound traffic. Incredibly, it worked!
I'm just a developer trying to wrestle a complex project towards delivery so this may have its flaws or you may know of a better solution, but for me I can now move on.
I'm using Google Compute Engine and have set up a load balancer over an instance group. I have also reserved an IP address. Everything works just fine if I access the specific port (8080), but if I just try to access the IP address I get a 404 error. I have also added a domain and have the same problem: domain.com:8080 works but just domain.com gives a 404 error.
Is the same host and path rule as this guy's (host: *, path: /*) enough? Or is there more configuration to be done? I can't seem to find this information in the docs.
Set up the load balancer frontend to listen on port 80 instead of 8080, and/or on 443 for HTTPS (which requires an SSL certificate...).
the nginx.conf:
server {
    listen 8080;
}

server {
    listen 80;
    server_name localhost;

    location / {
        root /test/public;
        index index.html index.htm;
    }

    location /api {
        proxy_pass http://localhost:8080;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}
The request and response headers are almost plain; no auth/session/cache parameters are involved.
For the same URI, the first request returns successfully, while the second returns 404, and so on.
I've tried disabling proxy buffering, but it has no effect.
I'm 99.99% sure you have IPv6 enabled. In that case localhost resolves to two IP addresses, 127.0.0.1 and [::1], and nginx balances requests between them.
http://nginx.org/r/proxy_pass:
If a domain name resolves to several addresses, all of them will be used in a round-robin fashion.
On the other hand, you have a listen 8080; directive that tends to listen only on IPv4 addresses (this depends on the OS, nginx version, and other environment details).
You could solve your problem in several ways:
use an explicit IPv4 address: proxy_pass http://127.0.0.1:8080; (sketched below)
use an explicit IPv4 and IPv6 listen: listen [::]:8080 ipv6only=off;
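A minimal sketch of the first option, assuming the rest of the posted config stays the same:

server {
    listen 80;
    server_name localhost;

    location /api {
        # An explicit IPv4 address avoids round-robin between
        # 127.0.0.1 and [::1] when "localhost" resolves to both.
        proxy_pass http://127.0.0.1:8080;
    }
}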
I observed the same problem in a docker environment, but the reason was independent of nginx. I just made a stupid copy-paste mistake.
The setting:
I deployed the docker containers via several docker-compose files, so I have the following structure:
API-Gateway-Container based on nginx, which references
Webserver 1 based on nginx and
Webserver 2 based on nginx
Each of them has its own Dockerfile and docker-compose file. Because the structure of the compose files for Webserver1 and Webserver2 is very similar, I copied it and replaced the container name and some other stuff. So far so good. Starting and stopping the containers was no problem, and checking them with docker container ls showed no abnormality. Accessing Webserver1 and Webserver2 via http://localhost:<Portnumber for server> was no problem, but accessing Webserver1 through the API gateway led to alternating 200 and 404 responses, while Webserver2 worked well.
After days of debugging I found the problem: as I mentioned, I copied the docker-compose file from Webserver1 for Webserver2, and while I replaced the container name, I forgot to replace the service name. My docker-compose file starts like this:
version: '3'
services:
  webserver1:
    image: 'nginx:latest'
    container_name: webserver2
    ...
This setup also leads to the described behavior.
Hope someone can save some days or hours by reading this post ;-)
André
Well, in my case the problem was pretty straightforward. I had about 15 server blocks, and the port that I set up for my nodejs proxy_pass was already being used by some old server block hiding in my enabled-servers directory. So nginx was randomly proxy-passing to either the old server, which was not running, or the one I had just started.
So I just grepped for the port number in the directory and found 2 instances. Changed my port number and the problem was fixed.
I've set up an example project that uses the latest version of nginx, which supports HTTP/2.
I was going off this official blog post: https://www.nginx.com/blog/nginx-1-9-5/
Here is a working code example (with details of how to setup everything within the README - nginx.conf pasted below as well): https://github.com/Integralist/Docker-Examples/tree/master/Nginx-HTTP2
user nobody nogroup;
worker_processes auto;

events {
    worker_connections 512;
}

http {
    upstream app {
        server app:4567;
    }

    server {
        listen *:80 http2;

        location /app/ {
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen *:443 ssl http2;
        server_name integralist.com;

        ssl_certificate /etc/nginx/certs/server.crt;
        ssl_certificate_key /etc/nginx/certs/server.key;
        ssl_trusted_certificate /etc/nginx/certs/ca.crt;

        location /app/ {
            proxy_pass http://app/;
        }
    }
}
Although the example works, I've hit an issue whereby if I go to the service endpoint in my browser using HTTP, it first downloads a file called download and then redirects correctly to HTTPS.
I'm not sure what this file is or why the redirection causes it to happen, but its content is ġˇˇˇˇˇˇ ?
If I try using curl (e.g. curl --insecure http://$dev_ip:$dev_80/app/foo) the redirect fails to happen, and I think it's because of this weird downloaded file? The response from curl to stdout is just ??????
I wonder if this is possibly a side-effect of using Docker to containerize the Ruby application and the nginx server?
Update
I removed http2 from listen *:80 http2; so it now reads listen *:80;, and the download doesn't happen, but the problem I have now is getting the redirect to point to the correct docker port :-/
To clarify, I have an nginx container with dynamic port generation (-P): one port for accessing the containerized nginx service on :80 and one for :443. My nginx.conf redirects traffic from HTTP to HTTPS, but I need to be able to identify the port mapped to 443.
e.g. docker ps shows 0.0.0.0:32791->80/tcp, 0.0.0.0:32790->443/tcp
I'm trying to redirect http://example.com:32791/app/foo to https://example.com:32790/app/foo
I'm not sure how I could configure nginx to know the relevant 443 Docker port number as that's dynamically generated?
I could be explicit and use -p when running the nginx container, but I'd still need to pass that port number into nginx as a variable somehow (a quick google suggests using Docker's -e "my_port=9999" and then accessing it via nginx's env declaration).
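For what it's worth, my understanding is that nginx's env directive only exposes the variable to worker processes and doesn't make it usable inside the config itself. One common workaround is to keep a config template and render it with envsubst when the container starts. A rough, untested sketch, where APP_HTTPS_PORT is a made-up variable name supplied via docker run -e APP_HTTPS_PORT=60443:

# default.conf.template, rendered at container start with something like:
#   envsubst '$APP_HTTPS_PORT' < default.conf.template > /etc/nginx/conf.d/default.conf
# (restricting envsubst to $APP_HTTPS_PORT leaves $host and $request_uri untouched)
server {
    listen *:80;

    location /app/ {
        # APP_HTTPS_PORT is replaced with the host port mapped to 443
        return 301 https://$host:${APP_HTTPS_PORT}$request_uri;
    }
}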
Update 2
I've even tried swapping to explicit ports -p and then hardcoding the port number into the nginx.conf with no luck (e.g. docker run ... -p 60443:443 -p 60080:80)...
listen *:80;

location /app/ {
    return 301 https://$host:60443$request_uri;
}
...if I hit http://example.com:60080/app/ it gets redirected to https://example.com:60443/, so almost there, but the actual path /app/ isn't added to the end of the redirect?
the problem I have is trying to get the redirect to point to the correct docker port now
If your nginx docker container is linked (docker run --link) with the Ruby one, the correct port is the one EXPOSE(d) in the Ruby container image (even if you don't know which host port that EXPOSEd port was mapped to).
A direct container-to-container link can use the port mentioned in the Dockerfile.
For a particular internal purpose I would like to send people back to a URL on their own machine. How exactly would I do this? I can't really do
server {
    server_name www.yayaya.com;
    rewrite ^(.*) localhost:3000$1 permanent;
}
because that will point to the server's localhost, right?
Either it does the lookup on the server and converts "localhost" to "127.0.0.1", or it hands "localhost" to the client, which will still convert it to "127.0.0.1". So either way, the client should be redirected to 127.0.0.1, which should be correct.
I'm not an expert on nginx, but I don't see why your example wouldn't work.
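One caveat worth adding, though: as written, the rewrite target has no scheme, so I'd expect the redirect to be interpreted relative to www.yayaya.com rather than as an absolute URL pointing at the client's own machine. A minimal, untested sketch of what I'd try instead:

server {
    server_name www.yayaya.com;

    # The explicit http:// scheme makes nginx send the browser an absolute
    # redirect to port 3000 on its own machine, rather than treating
    # localhost:3000... as a URI on this server.
    rewrite ^(.*)$ http://localhost:3000$1 permanent;
}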