nginx 301 drops port forwarded by vagrant - redirect

I have a vagrant vm running nginx on port 80. My host machine forwards port 8080 to the vagrant vm's port 80.
I need to rewrite a URL with a 301 redirect. The redirect works, but the port I use to reach nginx through the forward (8080) is dropped, so the redirect fails:
http://server.com:8080/blog/two
becomes
http://server.com/blog.php?article=two
but it should be
http://server.com:8080/blog.php?article=two
Example rule:
rewrite ^/blog/(.*)$ /blog.php?article=$1 last;
Thanks!

Extract the original port number from the Host header field:
set $port '';
if ($http_host ~ :(\d+)$) {
    set $port :$1;
}
# an absolute-URL rewrite returns a 302 by default; add "permanent" for the 301
rewrite ^/blog/(.*)$ http://example.com$port/blog.php?article=$1 permanent;
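A quick way to check the fix, assuming the site is reached as example.com through the 8080 forward (matching the hardcoded example.com in the rewrite above):
$ curl -I http://example.com:8080/blog/two
# expected response, with the port preserved:
# HTTP/1.1 301 Moved Permanently
# Location: http://example.com:8080/blog.php?article=two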

What is _ in nginx listen option?
I read listen 80 _; in the nginx cookbook.
I found that server_name _; simply means an invalid server name.
But what does the _ mean in listen?
Ref: https://nginx.org/en/docs/http/ngx_http_core_module.html#server
listen 443 ssl makes nginx listen on all IPv4 addresses on the server, on port 443 (0.0.0.0:443),
while
listen [::]:443 ssl makes nginx listen on all IPv6 addresses on the server, on port 443 (:::443).
[::]:443 will not make nginx respond on IPv4 by default, unless you specify the ipv6only=off parameter:
listen [::]:443 ipv6only=off;
ssl:
The ssl parameter (0.7.14) specifies that all connections accepted on this port should work in SSL mode.
http2:
The http2 parameter (1.9.5) configures the port to accept HTTP/2 connections.
This doesn't mean it accepts only HTTP/2 connections.
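Putting those parameters together, a minimal sketch of a server block (the certificate paths are placeholders):
server {
    listen 443 ssl http2;        # all IPv4 addresses, port 443
    listen [::]:443 ssl http2;   # all IPv6 addresses, port 443
    ssl_certificate     /etc/nginx/ssl/example.crt;   # placeholder
    ssl_certificate_key /etc/nginx/ssl/example.key;   # placeholder
}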

telnet: connect to address 192.168.33.x: Connection refused - over Vagrant centos machine

I have created a CentOS machine and installed the Nexus service on it.
Nexus runs on port 8081, which I have forwarded in the Vagrantfile with the lines below:
machine1.vm.network "private_network", ip: "192.168.33.x"
machine1.vm.network "forwarded_port", guest: 80, host: 80
machine1.vm.network "forwarded_port", guest: 8080, host: 8080
machine1.vm.network "forwarded_port", guest: 8081, host: 8081
The Nexus service is running fine on the CentOS machine, but telnet to the port fails both from the server itself and from other servers on its network. The port is not reachable from the Windows host machine either.
The server IP is reachable from the other machines on its network; all three machines are created from the same Vagrantfile.
I have checked and confirmed that the Nexus service is actually running on port 8081.
I have tried to open port 8081, to ensure the firewall is not blocking it, using the command below:
iptables -A INPUT -p tcp -m tcp --dport 8081 -j ACCEPT
I have browsed through multiple forums for a solution. I acknowledge this is a very generic error that I have faced in the past as well, but in this case I am not able to identify the root cause. I suspect it is related to Vagrant-specific configuration.
I also tried to curl the service from the CentOS server and from the host; it doesn't work:
$ curl http://localhost:8081
curl: (7) Failed connect to localhost:8081; Connection refused
The netstat command doesn't return anything:
netstat -an | grep 8081
[vagrant@master1 bin]$
However, the Nexus service is up and running on the server on that same port.
Here is the Vagrantfile code:
Vagrant.configure("2") do |config|
  config.vm.define "machine1" do |machine1|
    machine1.vm.provider "virtualbox" do |host|
      host.memory = "2048"
      host.cpus = 1
    end
    machine1.vm.hostname = "machine1"
    machine1.vm.network "private_network", ip: "192.168.33.x3"
    machine1.vm.network "forwarded_port", guest: 80, host: 80
    machine1.vm.network "forwarded_port", guest: 8080, host: 8080
    machine1.vm.network "forwarded_port", guest: 8081, host: 8081
    machine1.vm.synced_folder "../data", "/data"
  end
  config.vm.define "machine2" do |machine2|
    machine2.vm.provider "virtualbox" do |host|
      host.memory = "2048"
      host.cpus = 1
    end
    machine2.vm.hostname = "machine2"
    machine2.vm.box = "generic/ubuntu1804"
    machine2.vm.box_check_update = false
    machine2.vm.network "private_network", ip: "192.168.33.x2"
    machine2.vm.network "forwarded_port", guest: 80, host: 85
    machine2.vm.network "forwarded_port", guest: 8080, host: 8085
    machine2.vm.network "forwarded_port", guest: 8081, host: 8090
  end
  config.vm.define "master" do |master|
    master.vm.provider "virtualbox" do |hosts|
      hosts.memory = "2048"
      hosts.cpus = 2
    end
    master.vm.hostname = "master"
    master.vm.network "private_network", ip: "192.168.33.x1"
  end
end
Since the Nexus service is running on port 8081, I should be able to access it from my host machine at http://localhost:8081.
The issue is most likely Vagrant networking, as you guessed. If you just want to access the Nexus service running on the guest from the host, perhaps this can be useful.
As a workaround, you can make the Vagrant box available on a public network and then access it using its public IP. For that, enable config.vm.network "public_network" in your Vagrantfile and then run vagrant reload. Once done, try accessing http://public_IP_of_guest:8081.
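For example, for machine1 the change is a one-line sketch (Vagrant will prompt for the bridge interface if more than one is available):
machine1.vm.network "public_network"
followed by:
vagrant reload machine1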
Please let me know how it goes.
This can have many causes. In my case, I use a Vagrant Fedora box.
I tried two setups:
First, using a private_network attached to a host-only adapter, and launching the httpd service to test the connection between guest and host:
config.vm.network "private_network", type: "dhcp", name: "vboxnet2"
config.vm.network "forwarded_port", guest: 80, host: 7070
but I was not able to ping my guest machine from the host, and could not telnet to the open httpd service.
Second, using a public_network, and launching the httpd service to test connectivity:
config.vm.network "public_network", bridge: "en0: Wi-Fi (AirPort)", use_dhcp_assigned_default_route: true
I could ping my guest from my host, but I could not telnet to the httpd service.
In both cases, the issue was that port 80 on the Fedora guest was blocked by the firewall. Here is what fixed the issue and got everything working for both private_network and public_network:
firewall-cmd --permanent --add-port 80/tcp   # open the port permanently
firewall-cmd --zone=public --permanent --add-service=http
firewall-cmd --list-ports                    # check whether the port was opened
systemctl stop firewalld                     # restart the firewall service so the
systemctl start firewalld                    #   permanent rules take effect
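Since the rules above were added with --permanent, an alternative to the stop/start at the end is to reload the firewall configuration in place:
firewall-cmd --reload   # re-read the permanent configuration into the running firewall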

How can I close haproxy frontend connections coming from unknown hosts?

Right now I am using nginx to close connections from unknown hosts and return 444 ("no response").
How do I achieve the same with haproxy, which sits in front of nginx (saving the extra hop between haproxy and nginx)?
Current nginx config:
server {
    # Close connection for unrecognized hosts (444 no response)
    listen 80 default_server;
    listen [::]:80 default_server;
    return 444;
}
This can be achieved using "silent-drop":
acl host_example req.hdr(host) -i example.com
http-request silent-drop if !host_example
https://cbonte.github.io/haproxy-dconv/2.0/configuration.html#4.2-http-request%20silent-drop
https://www.haproxy.com/blog/introduction-to-haproxy-acls/#using-acls-to-block-requests
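As a minimal sketch, those two lines in the context of a full frontend (the frontend/backend names and the server address are made up for illustration):
frontend http_in
    bind :80
    mode http
    acl host_example req.hdr(host) -i example.com
    # drop unknown hosts without sending any response, like nginx's 444
    http-request silent-drop if !host_example
    default_backend nginx_servers

backend nginx_servers
    mode http
    server nginx1 127.0.0.1:8080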
@Ejez, you can either accept connections only from known IPs, or block particular IPs, in the haproxy frontend.
Reference code:
Allow known IPs:
acl network_allowed src 20.30.40.50 20.30.40.40
use_backend allowed_backend if network_allowed
or block certain IPs only:
acl is-blocked-ip src 192.0.2.11 192.0.2.12 192.0.2.18
http-request deny if is-blocked-ip
Ref:
1. https://blog.sleeplessbeastie.eu/2018/03/26/how-to-block-particular-ip-addresses-on-haproxy/
2. https://raymii.org/s/snippets/haproxy_restrict_specific_urls_to_specific_ip_addresses.html
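And a sketch of the allow-list variant in a full frontend (the IPs and the allowed_backend name are the examples from the reference code; the server address is made up):
frontend http_in
    bind :80
    mode http
    acl network_allowed src 20.30.40.50 20.30.40.40
    # deny everything outside the allow-list before routing
    http-request deny if !network_allowed
    use_backend allowed_backend if network_allowed

backend allowed_backend
    mode http
    server web1 127.0.0.1:8080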

Prevent public IP address binding on Kubernetes single master/node set-up

I'm following the instructions here to spin up a single-node master Kubernetes install, planning to make a website hosted within it available via an nginx ingress controller exposed directly on the internet (on a physical server, not GCE, AWS, or another cloud).
Set-up works as expected: I can hit the load balancer, flow through the ingress to the target echoheaders instance, and get my output. Good stuff.
The trouble comes when I port-scan the server's public internet IP and see all these open ports besides the ingress port (80):
Open TCP Port: 80 http
Open TCP Port: 4194
Open TCP Port: 6443
Open TCP Port: 8081
Open TCP Port: 10250
Open TCP Port: 10251
Open TCP Port: 10252
Open TCP Port: 10255
Open TCP Port: 38654
Open TCP Port: 38700
Open TCP Port: 39055
Open TCP Port: 39056
Open TCP Port: 44667
All of the extra ports correspond to cAdvisor, SkyDNS, and the various echoheaders and nginx instances, which for security reasons should not be bound to the public IP address of the server. All of these are being injected into the host's KUBE-PORTALS-HOST iptables chain with bindings to the server's public IP by kube-proxy.
How can I get hyperkube to tell kube-proxy to bind only to the Docker IP (172.x) or private cluster IP (10.x) addresses?
You should be able to set the bind address on kube-proxy (http://kubernetes.io/docs/admin/kube-proxy/):
--bind-address=0.0.0.0: The IP address for the proxy server to serve on (set to 0.0.0.0 for all interfaces)
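For example (10.0.0.1 here is a stand-in for the node's private cluster address):
# bind kube-proxy only to the private cluster IP instead of all interfaces
kube-proxy --bind-address=10.0.0.1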

Redirecting requests over 8443 to 443

One of our applications was previously configured to serve SSL from Tomcat over port 8443. We're migrating this application to a new environment and switching to nginx for SSL termination rather than Tomcat (which will now run on 8080). I would like folks to be able to connect to the new environment over 8443 but get redirected to 443 (to support anyone's old bookmarks or links).
I currently have rulesets to redirect 80 to 443, and a full ssl_certificate set defined for listening on 443, but I've had no luck with a variety of methods to listen on 8443 and redirect to 443.
Any suggestions?
Just define a separate server for port 8443, and do a redirect from there. You'd obviously still have to have a proper certificate for your 8443 server, too.
server {
    listen 8443 ssl;
    server_name example.com;
    ssl_...;
    return 301 https://example.com$request_uri;
}
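To check the redirect from a client (using the example.com name from the block above; -k skips certificate verification in case the 8443 certificate is not trusted):
$ curl -kI https://example.com:8443/some/path
# HTTP/1.1 301 Moved Permanently
# Location: https://example.com/some/path
Because the redirect uses $request_uri, the original path and query string survive, so old bookmarks land on the right page over 443.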