Nginx redirect to an external URL

What I'm trying to do is have my web server redirect all requests for /rdr/extern_url to extern_url, instead of doing it through PHP.
location /rdr {
    rewrite ^/rdr/(.*)$ $1 permanent;
}
What's wrong here is that if I access http://localhost/rdr/http://google.com, my browser tells me:
Error 310 (net::ERR_TOO_MANY_REDIRECTS): There were too many redirects.
How do I redirect properly?

Trivial check:
$ curl -si 'http://localhost/rdr/http://www.google.com' | head -8
HTTP/1.1 301 Moved Permanently
Server: nginx/1.2.0
Date: Sun, 05 Aug 2012 09:33:14 GMT
Content-Type: text/html
Content-Length: 184
Connection: keep-alive
Location: http:/www.google.com
As you can see, there is only one slash after the scheme in the Location header.
After adding the following directive to the server block:
merge_slashes off;
We'll get the correct reply:
$ curl -si 'http://localhost/rdr/http://www.google.com' | head -8
HTTP/1.1 301 Moved Permanently
Server: nginx/1.2.0
Date: Sun, 05 Aug 2012 09:36:56 GMT
Content-Type: text/html
Content-Length: 184
Connection: keep-alive
Location: http://www.google.com
It becomes clear from the comments that you may want to pass a hostname without the scheme to your redirecting service. To handle this, you need to define two locations that process the two cases separately:
server {
    listen 80;
    server_name localhost;

    merge_slashes off;

    location /rdr {
        location /rdr/http:// {
            rewrite ^/rdr/(.*)$ $1 permanent;
        }
        rewrite ^/rdr/(.*)$ http://$1 permanent;
    }
}
Here I've defined /rdr/http:// as a sub-location of /rdr just to keep the redirector service in one block -- it's perfectly valid to create both locations at server-level.
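To sanity-check both forms, here is a minimal sketch using Python's requests against the config above; it assumes the server block is live on localhost:80, and the target host is just an example:

import requests

# Both request forms should answer 301 with a fully qualified
# http:// URL in Location. Assumes the server block above is
# running on localhost:80.
for path in ("/rdr/http://www.google.com", "/rdr/www.google.com"):
    r = requests.get("http://localhost" + path, allow_redirects=False)
    print(r.status_code, r.headers["Location"])
    # Expected for both: 301 http://www.google.com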


HTTP Redirect giving the same url (original) as Location header

I am trying to fetch data from a website using sockets. I get a redirect, but the Location header is the same as the original URL.
The code below works perfectly:
import requests

r = requests.get('https://links.papareact.com/f90',
                 allow_redirects=False)
print(r.status_code)
print(r.headers["location"])
Here is the output; the Location header is the new URL:
301
http://pngimg.com/uploads/amazon/amazon_PNG11.png
Here is the socket code which behaves weird
import socket

HOST = "links.papareact.com"
PORT = 80
path = "f90"

headers = f"GET /{path} HTTP/1.1\r\n" + \
          f"Host: {HOST}\r\n\r\n"

connection = (HOST, PORT)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(connection)
s.send(headers.encode())

while True:
    # decode() without strip(): stripping removes the trailing
    # "\r\n\r\n", so the endswith() check below would never match.
    data = s.recv(4096).decode()
    if not data:
        break
    print(data.strip())
    if data.endswith("\r\n\r\n"):
        break
Output
HTTP/1.1 301 Moved Permanently
Date: Tue, 17 Aug 2021 09:15:33 GMT
Transfer-Encoding: chunked
Connection: keep-alive
Cache-Control: max-age=3600
Expires: Tue, 17 Aug 2021 10:15:33 GMT
Location: https://links.papareact.com/f90
Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v3?s=0ptwEG6zbfCPDGYczBruC%2FNuMmmsfwqSd6emUpu2aRIa9JtNvIpV3rcWZjfdMrP7EV9EM94UxTx4XbEk4P6KBk4PIb%2BLxPrwitq1Fo10u%2FtGnJnCFqFFh8XGutpJsIy13zCaUYGf"}],"group":"cf-nel","max_age":604800}
NEL: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
Server: cloudflare
CF-RAY: 6801cc6c5d301d14-BLR
alt-svc: h3-27=":443"; ma=86400, h3-28=":443"; ma=86400, h3-29=":443"; ma=86400, h3=":443"; ma=86400
Here the Location header is the same as the original URL.
Please explain why this is happening, and a possible solution to get the expected result? :(
Here is the socket code which behaves weird
Nothing weird here. The Location header redirects you to https:// (encrypted, port 443), while your original request was for http:// (unencrypted, port 80).
It is pretty common for web sites to redirect a plain HTTP request to the same path over HTTPS. If you then access this new (HTTPS) location, you will likely get the same redirect as you did with your requests.get('https://..., i.e. to http://pngimg.com/uploads/amazon/amazon_PNG11.png.
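If you want to keep working at the socket level, you can follow the redirect yourself by repeating the request over TLS on port 443. A minimal sketch using the standard-library ssl module, with the same host and path as above:

import socket
import ssl

HOST = "links.papareact.com"
PORT = 443  # the Location header points to https://, so speak TLS
path = "f90"

request = (f"GET /{path} HTTP/1.1\r\n"
           f"Host: {HOST}\r\n"
           f"Connection: close\r\n\r\n")

context = ssl.create_default_context()
with socket.create_connection((HOST, PORT)) as raw_sock:
    # Wrap the TCP socket in TLS; server_hostname enables SNI
    # and certificate verification.
    with context.wrap_socket(raw_sock, server_hostname=HOST) as s:
        s.sendall(request.encode())
        response = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:  # server closes thanks to Connection: close
                break
            response += chunk

# Print just the header block; Location should now be the pngimg.com URL.
print(response.decode(errors="replace").split("\r\n\r\n")[0])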

NGINX redirects to http

I am a newbie to NGINX and have been trying to get this problem sorted out.
Here is the NGINX configuration. It works well for the most part; however, when a request comes in without a trailing slash at the end, it redirects to http://$host instead of https://$host. It was including the port in the redirect earlier, but I turned off port_in_redirect, which stopped the port number from showing up in the browser.
Somehow https://domain.com/xyz/abc still gets redirected to http://domain.com/xyz/abc/
My guess is that try_files is somehow not retaining the domain name.
I am sure there is something wrong in the configuration, but I have no deep insight into what's causing it.
Any input is highly appreciated.
server {
    listen 8080;
    server_name _;

    location /xyz/abc {
        alias /var/www/html/;
        try_files $uri $uri/ /xyz/abc/index.html;
    }

    location ~ ^/xyz/foo/(.*) {
        return 301 https://$host/xyz/abc/foo/$1;
    }
}
Below is the curl output
curl -I https://domain.com/xyz/abc
HTTP/1.1 301 Moved Permanently
Server: nginx/1.6.3
Date: Thu, 27 Jan 2016 08:18:27 GMT
Content-Type: text/html
Content-Length: 184
Location: http://domain.com/xyz/abc/
Connection: keep-alive
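For reference, the same check scripted with Python's requests (domain.com is the placeholder host from the question):

import requests

# Request the path without the trailing slash and inspect the redirect
# without following it. domain.com is a placeholder, as in the curl call.
r = requests.get("https://domain.com/xyz/abc", allow_redirects=False)
print(r.status_code)              # 301
print(r.headers.get("Location"))  # currently http://domain.com/xyz/abc/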

HAProxy 1.4: how to replace X-Forwarded-For with custom IP

I have an HAProxy 1.4 server behind an AWS ELB. Logically, the ELB sends the user's IP in the X-Forwarded-For header. My app reads that header and behaves differently based on the IP (country).
I want to test that behavior by overriding X-Forwarded-For with custom IPs, but the AWS ELB appends my current IP to my custom value (X-Forwarded-For: 1.2.3.4, 200.1.130.2).
What I have been trying to do is send another custom header, X-Force-IP, and once it gets into HAProxy, delete the X-Forwarded-For header and use reqirep to rename X-Force-IP to X-Forwarded-For.
This is what my config chunk looks like:
acl custom-ip hdr_cnt(X-Force-IP) 1
reqidel ^X-Forwarded-For:.* if custom-ip
reqrep X-Force-IP X-Forwarded-For if custom-ip
But when the request gets to my app, the app server (lighttpd) rejects it with "HTTP 400 Bad Request", as if it were malformed.
[ec2-user@haproxy-stage]$ curl -I -H "X-Forwarded-For: 123.456.7.12" "http://www.example.com"
HTTP/1.1 200 OK
Set-Cookie: PHPSESSID=mcs0tqlsg31haiavqopdvm02i6; path=/; domain=www.example.com
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Content-Type: text/html; charset=UTF-8
Date: Sun, 11 Jan 2015 02:57:34 GMT
Server: beta
[ec2-user@haproxy-stage]$ curl -I -H "X-Forwarded-For: 123.456.7.12" -H "X-Force-IP: 321.456.7.12" "http://www.example.com"
HTTP/1.1 400 Bad Request
Content-Type: text/html
Content-Length: 349
Date: Sun, 11 Jan 2015 02:57:44 GMT
Server: beta
From the previous it looks like the ACL is working.
I checked with tcpdump on the app server, and it seems that HAProxy deleted the X-Forwarded-For header but also deleted X-Force-IP instead of renaming it.
[ec2-user@beta ~]# sudo tcpdump -A -s 20240 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' | egrep --line-buffered "^........(GET |HTTP\/|POST |HEAD )|^[A-Za-z0-9-]+: " | sed -r 's/^........(GET |HTTP\/|POST |HEAD )/\n\1/g'
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 20240 bytes
GET / HTTP/1.1
User-Agent: curl/7.38.0
Host: www.example.com
Accept: */*
Connection: close
HTTP/1.1 400 Bad Request
Content-Type: text/html
Content-Length: 349
Connection: close
Date: Sun, 11 Jan 2015 02:56:50 GMT
Server: beta
The previous was with the X-Force-IP, and the following without it:
GET / HTTP/1.1
User-Agent: curl/7.38.0
Host: www.example.com
Accept: */*
X-Forwarded-For: 123.456.7.12
Connection: close
HTTP/1.1 200 OK
X-Powered-By: PHP/5.3.4
Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
Pragma: no-cache
Content-Type: text/html; charset=UTF-8
Connection: close
Transfer-Encoding: chunked
Date: Sun, 11 Jan 2015 02:57:02 GMT
Server: beta
^C71 packets captured
71 packets received by filter
0 packets dropped by kernel
Any help?
I was expecting "X-Force-IP: 321.456.7.12" to be converted into "X-Forwarded-For: 321.456.7.12".
Thanks!
Ignacio
The regex matching provided here doesn't do simple substitution. It's quite a bit more powerful, and has to be used accordingly.
reqrep ^X-Force-IP:(.*) X-Forwarded-For:\1 if custom-ip
The reqrep (case-sensitive request regex replace) and reqirep (case-insensitive request regex replace) directives operate on individual request header lines, replacing the header name and its value with the 2nd argument whenever the 1st argument matches. So if there is information you want to preserve (such as the value), you need one or more capture groups, such as (.*), in the 1st argument and a placeholder \1 in the 2nd argument to carry the data over.
Your current configuration does indeed invalidate the request, by creating a malformed/incomplete header line.
Also, you should anchor the pattern to the left side of the header name with ^. Otherwise, the expression could match more headers than you expect.
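The capture-group mechanics are the same as in any regex replace; a quick Python illustration of why the \1 placeholder matters (the header line is just example data):

import re

header = "X-Force-IP: 321.456.7.12"

# Without a capture group, the whole matched line is replaced verbatim,
# dropping the value and leaving a malformed header:
print(re.sub(r"^X-Force-IP:.*", "X-Forwarded-For:", header))
# -> X-Forwarded-For:

# With a capture group and a \1 placeholder, the value is preserved:
print(re.sub(r"^X-Force-IP:(.*)", r"X-Forwarded-For:\1", header))
# -> X-Forwarded-For: 321.456.7.12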

nginx static index redirect

This seems ridiculous but I've not found a working answer in over an hour of searching.
I have a static website running off nginx (which happens to be behind Varnish). The index file is called index.html. I want to redirect anyone who actually visits the URL mydomain.com/index.html back to mydomain.com.
Here is my nginx config for the site:
server {
    listen 8080;
    server_name www.mydomain.com;
    port_in_redirect off;

    location / {
        root /usr/share/nginx/www.mydomain.com/public;
        index index.html;
    }

    rewrite /index.html http://www.mydomain.com/ permanent;
}
http://www.mydomain.com/index.html responds, as expected, with a 301 pointing at http://www.mydomain.com/, but unfortunately http://www.mydomain.com/ also serves a 301 back to itself, so we get a redirect loop.
How can I tell nginx to only serve the 301 if index.html is literally in the request?
Add a new location block to handle your homepage, and use the try_files directive (instead of "index index.html;") to look for the index.html file directly. Note that try_files requires at least two arguments, so I put the same file twice.
location = / {
    root /usr/share/nginx/www.mydomain.com/public;
    try_files /index.html /index.html;
}
Looks good based on my experiment:
curl -iL http://www.mydomain.com/index.html
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Sat, 16 Mar 2013 09:07:27 GMT
Content-Type: text/html
Content-Length: 178
Connection: keep-alive
Location: http://www.mydomain.com/
HTTP/1.1 200 OK
Server: nginx
Date: Sat, 16 Mar 2013 09:07:27 GMT
Content-Type: text/html
Content-Length: 4
Last-Modified: Sat, 16 Mar 2013 08:05:47 GMT
Connection: keep-alive
Accept-Ranges: bytes
[UPDATE]
The root cause of the redirect loop is the 'index' directive, which triggers nginx to do another round of location matching. That is how the rewrite rule outside the location block gets executed again, causing the loop. In effect, the 'index' directive acts like a 'rewrite ... last;' directive, which you don't want in your case.
The trick is to not trigger another location match. try_files can do that efficiently, which is why I picked it in my original answer. However, if you like, another simple fix is to replace
index index.html;
by
rewrite ^/$ /index.html break;
inside your original "location /" block. This 'rewrite ... break;' directive keeps nginx inside the same location block, effectively stopping the loop. The side effect of this approach is that you lose the functionality provided by the 'index' directive.
[UPDATE 2]
Actually, the index directive executes after the rewrite directive, so the following also works. Note that I just added the rewrite ... break; line. If the request URI is "/", nginx finds the existing file /index.html via the rewrite rule first, so the index directive is never triggered for this request. As a result, both directives can work together.
location / {
    root /usr/share/nginx/www.mydomain.com/public;
    index index.html;
    rewrite ^/$ /index.html break;
}
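Either variant can be verified the same way as the curl experiment above; a small sketch with requests (www.mydomain.com is the placeholder host):

import requests

# /index.html should 301 exactly once...
r = requests.get("http://www.mydomain.com/index.html", allow_redirects=False)
print(r.status_code, r.headers.get("Location"))
# -> 301 http://www.mydomain.com/

# ...and / should answer 200 directly, with no further redirect.
r = requests.get("http://www.mydomain.com/", allow_redirects=False)
print(r.status_code)
# -> 200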
Looks like you really don't want index.html to show up in the address bar, is that correct?
If you add a rewrite directive to the nginx config, you'll get a redirect loop, as you have experienced. If you are open to a JavaScript solution, you can place this anywhere in your index.html to silently rewrite the address bar:
<script>
history.pushState(null, '', '/');
</script>
Keep in mind that while most modern browsers support the history API, not all do (namely, most versions of IE).

Facebook links to my site resolve as 403 forbidden

Hi, I'm experiencing a super weird problem.
Whenever I post links to my website on Facebook, they come up as Forbidden.
The site itself works great, and I have not seen this when linking on other sites.
Could this be a server misconfiguration? Any thoughts on where to look?
Here's some info:
I have a dedicated server running WHM 11.25.0.
I have 2 sites hosted here using cPanel 11.25.0.
The error message:
Forbidden
You don't have permission to access /blog/deepwater-horizon-11/ on this server.
Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.
Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8i DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 Server at www.offshoreinjuries.com Port 80
UPDATE:
Here is a sample link, if it helps. (Note that going to the linked page directly works fine.)
http://www.facebook.com/l.php?u=http%3A%2F%2Fwww.offshoreinjuries.com%2Fblog%2Fdeepwater-horizon-11%2F&h=834ea
UPDATE and ANSWER:
Found the issue and added a complete answer below.
You must have a rule somewhere that reads the HTTP_REFERER and rejects incoming links from Facebook. Seriously. This is what happens between the lines:
No referrer
telnet www.offshoreinjuries.com 80
HEAD /blog/deepwater-horizon-11/ HTTP/1.1
Host: www.offshoreinjuries.com
HTTP/1.1 200 OK
Date: Fri, 28 May 2010 09:19:45 GMT
Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8i DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635
X-Powered-By: PHP/5.2.12
X-Pingback: http://www.offshoreinjuries.com/blog/xmlrpc.php
Content-Type: text/html; charset=UTF-8
200 OK, good.
Facebook referrer
telnet www.offshoreinjuries.com 80
HEAD /blog/deepwater-horizon-11/ HTTP/1.1
Host: www.offshoreinjuries.com
Referer: http://www.facebook.com/l.php?u=http%3A%2F%2Fwww.offshoreinjuries.com%2Fblog%2Fdeepwater-horizon-11%2F&h=834ea
HTTP/1.1 403 Forbidden
Date: Fri, 28 May 2010 09:21:04 GMT
Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8i DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635
Content-Type: text/html; charset=iso-8859-1
403 Forbidden, bad.
Any other referrer
telnet www.offshoreinjuries.com 80
HEAD /blog/deepwater-horizon-11/ HTTP/1.1
Host: www.offshoreinjuries.com
Referer: http://alvaro.es/
HTTP/1.1 200 OK
Date: Fri, 28 May 2010 09:20:36 GMT
Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8i DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635
X-Powered-By: PHP/5.2.12
X-Pingback: http://www.offshoreinjuries.com/blog/xmlrpc.php
Content-Type: text/html; charset=UTF-8
200 OK again.
Your server is actively rejecting visitors from Facebook.
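The same three-way test is easy to script; a sketch with Python's requests, varying only the Referer header:

import requests

URL = "http://www.offshoreinjuries.com/blog/deepwater-horizon-11/"
REFERERS = {
    "no referrer": None,
    "facebook": ("http://www.facebook.com/l.php?u=http%3A%2F%2F"
                 "www.offshoreinjuries.com%2Fblog%2Fdeepwater-horizon-11%2F"
                 "&h=834ea"),
    "other": "http://alvaro.es/",
}

# Only the Referer header changes between requests; only the
# Facebook one should come back 403.
for label, referer in REFERERS.items():
    headers = {"Referer": referer} if referer else {}
    r = requests.head(URL, headers=headers)
    print(f"{label}: {r.status_code}")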
I was finally able to get to the bottom of this behavior.
The default mod_security settings of my host, HostGator, include a set of whitelists and blacklists. Upon inspecting these, I found .facebook.com/l.php blacklisted.
l.php is a wrapper page that warns you that you are leaving Facebook. As I understand it, since this can be easily exploited, HostGator chose to essentially blacklist all outbound Facebook links.
I fixed my problem by removing .facebook.com/l.php from the mod_security blacklist; however, I could have also just reset my mod_security settings to Default (vs. the HostGator config) with a single click in WHM.