nginx lua not able to set headers before rewrite - redirect

I have a set of IP addresses in my Redis server that should be blocked. Now when a client makes a request:
the request must be intercepted by nginx,
which checks whether remote_addr belongs to the blocked IPs,
adds a header if the IP is blocked,
and then redirects to the actual IP address with the request_uri.
nginx.conf
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    lua_shared_dict ip_status 1m;

    server {
        listen 9080;
        server_name localhost;
        location ~ .* {
            rewrite_by_lua_file src/ip_check.lua;
        }
    }
}
src/ip_check.lua
-- redis configuration
local redis_host = "127.0.0.1"
local redis_port = 6379
-- connection timeouts for redis in ms.
local redis_max_idle_timeout = 10000
local redis_pool_size = 2 -- don't set this too high!
local redis_timeout = 200
-- check a set with this key for blacklist entries
local redis_key = ngx.var.remote_addr
local ip_status = ngx.shared.ip_status
local status_unblocked = "0"
local status_blocked = "1"
-- cache lookups for this many seconds
local cache_ttl = 1800
local redirect_host = "http://192.168.12.103:8080/Spring4MVCHelloWorld1"
local header_ip_status_key = "Ip-Status"
-- value of the header to be sent when the client ip address is blocked
local header_ip_status_value = "block"

local add_header = status_unblocked
-- look up the value in the cache first
local cache_result = ip_status:get(redis_key)
if cache_result then
    if cache_result == status_blocked then
        add_header = status_blocked
    end
else
    -- fall back to a lookup against redis
    local resty = require "resty.redis"
    local redis = resty:new()
    redis:set_timeout(redis_timeout)
    local connected, err = redis:connect(redis_host, redis_port)
    if not connected then
        ngx.log(ngx.ERR, "ip_check: could not connect to redis #"..redis_host..":"..redis_port.." - "..err)
    else
        ngx.log(ngx.ALERT, "ip_check: connect to redis #"..redis_host..":"..redis_port.." - successful")
        local result, err = redis:get(redis_key)
        if not result then
            ngx.log(ngx.ERR, "ip_check: lookup failed for "..ngx.var.remote_addr.." - "..err)
        else
            if result == status_blocked then
                add_header = status_blocked
            end
            -- cache the result from redis (the key must be redis_key;
            -- the original used an undefined variable "ip" here)
            ip_status:set(redis_key, add_header, cache_ttl)
        end
        redis:set_keepalive(redis_max_idle_timeout, redis_pool_size)
    end
end

ngx.log(ngx.ALERT, "ip_check: "..header_ip_status_key.." of "..ngx.var.remote_addr.." is "..add_header)
if add_header == status_blocked then
    ngx.header[header_ip_status_key] = header_ip_status_value
    ngx.req.set_header(header_ip_status_key, header_ip_status_value)
end
ngx.redirect(redirect_host..ngx.var.request_uri)
For testing purposes, I added a 127.0.0.1 key to Redis with value 1, so the redirect URI should be hit with the additional header. The problem I'm facing is that no matter whether I use ngx.header or ngx.req.set_header, the Ip-Status header is not sent with the redirected request and the end API does not receive it.
For example, if I hit http://localhost:9080/hello in the browser,
Request headers
Host:"localhost:9080"
User-Agent:"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:46.0) Gecko/20100101 Firefox/46.0"
Accept:"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"
Accept-Language:"en-US,en;q=0.5"
Accept-Encoding:"gzip, deflate"
Connection:"keep-alive"
Response headers
Connection:"keep-alive"
Content-Length:"166"
Content-Type:"text/html"
Date:"Thu, 05 May 2016 08:06:33 GMT"
Location:"http://192.168.12.103:8080/Spring4MVCHelloWorld1/hello"
Server:"openresty/1.9.7.2"
ip-status:"block"
For the redirected URI http://192.168.12.103:8080/Spring4MVCHelloWorld1/hello (from the Location header above),
Request headers
Host:"192.168.12.103:8080"
User-Agent:"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:46.0) Gecko/20100101 Firefox/46.0"
Accept:"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"
Accept-Language:"en-US,en;q=0.5"
Accept-Encoding:"gzip, deflate"
Cookie:"JSESSIONID=4834843FE0E76170E429028E096A66E5"
Connection:"keep-alive"
Response headers
Content-Language:"en-US"
Content-Length:"166"
Content-Type:"text/html;charset=UTF-8"
Date:"Thu, 05 May 2016 08:06:33 GMT"
Server:"Apache-Coyote/1.1"
I'm able to see the Ip-Status header in the response headers of the original request, but not in the request headers of the redirected URI. Any help on how to send the header to the redirected URI would be very helpful.
I'm new to nginx and Lua. Asking because I couldn't find any corresponding questions; apologies if this has already been asked.

It is the browser that follows the redirect to the redirected URI, and nginx has no control over the headers of that second request. So I removed
ngx.redirect(redirect_host..ngx.var.request_uri)
from src/ip_check.lua and changed nginx.conf to make a proxy call instead, and I was able to observe that the API received the additional header.
Modified nginx.conf
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    lua_shared_dict ip_status 1m;

    server {
        listen 9080;
        server_name localhost;
        location ~ .* {
            set $backend_host "http://192.168.12.103:8080/Spring4MVCHelloWorld1";
            access_by_lua_file src/ip_check.lua;
            proxy_pass $backend_host$request_uri;
        }
    }
}
This modified nginx.conf makes the request to $backend_host$request_uri itself, so the browser has no knowledge of any redirection. Hence, the headers set by ngx.req.set_header are sent along with the proxy call, and
ngx.header[header_ip_status_key] = header_ip_status_value
can also be removed from src/ip_check.lua.
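For reference, here is how the tail of src/ip_check.lua might look after those changes; a minimal sketch, with everything above the header-setting code unchanged:
-- access phase: only tag the request; proxy_pass does the forwarding
if add_header == status_blocked then
    ngx.req.set_header(header_ip_status_key, header_ip_status_value)
end
-- no ngx.redirect() any more: nginx itself proxies to $backend_host,
-- so the upstream receives the Ip-Status request header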

Related

Keycloak: User is able to access a realm by logging in to another

I have an nginx/openresty client to a Keycloak server for authorization using OpenID Connect.
I am using lua-resty-openidc to allow access to services behind the proxy.
I have created two clients at two different realms for different services.
The problem is that after a user gets authenticated at the first realm on e.g. https://<my-server>/auth/realms/<realm1>/protocol/openid-connect/auth?response_type=code&client_id=openresty&state=..........., he is able to directly access the other service at realm2 as well.
What is going on here? How can I ensure that the user will only be able to access the client at the realm he authenticated against?
How can I ensure that after logout the user will no longer be able to get access until he logs in anew?
[Edit-details]
My nginx.conf for the two services is below.
The user first accesses https://<my-server>/service_1/
and is redirected to Keycloak to give his password for realm1. He provides it and is able to access service_1.
However, if he then tries to access https://<my-server>/service_2/, he no longer has to authenticate but can log in, although service_2 is about a client on a different realm, with a different client_secret!
.....
location /service_1/ {
    access_by_lua_block {
        local opts = {
            redirect_uri_path = "/service_1/auth", -- we are sent here after auth
            discovery = "https://<my-server>/keycloak/auth/realms/realm1/.well-known/openid-configuration",
            client_id = "openresty",
            client_secret = "<client1-secret>",
            session_contents = {id_token=true} -- this is essential for safari!
        }
        -- call introspect for OAuth 2.0 Bearer Access Token validation
        local res, err = require("resty.openidc").authenticate(opts)
        if err then
            ngx.status = 403
            ngx.say(err)
            ngx.exit(ngx.HTTP_FORBIDDEN)
        end
    }
    # I disabled caching so the browser won't cache the site.
    expires 0;
    add_header Cache-Control private;
    proxy_pass http://<server-for-service1>:port1/foo/;
    proxy_set_header Host $http_host;
    proxy_http_version 1.1;
    proxy_redirect off;
    proxy_buffering off;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
..........................
location /service_2/ {
    access_by_lua_block {
        local opts = {
            redirect_uri_path = "/service_2/auth", -- we are sent here after auth
            discovery = "https://<my-server>/keycloak/auth/realms/realm2/.well-known/openid-configuration",
            client_id = "openresty",
            client_secret = "<client2-secret>",
            session_contents = {id_token=true} -- this is essential for safari!
        }
        -- call introspect for OAuth 2.0 Bearer Access Token validation
        local res, err = require("resty.openidc").authenticate(opts)
        if err then
            ngx.status = 403
            ngx.say(err)
            ngx.exit(ngx.HTTP_FORBIDDEN)
        end
    }
    # I disabled caching so the browser won't cache the site.
    expires 0;
    add_header Cache-Control private;
    proxy_pass http://<server-for-service2>:port2/bar/;
    proxy_set_header Host $http_host;
    proxy_http_version 1.1;
    proxy_redirect off;
    proxy_buffering off;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
[Edit-details 2]
I am using lua-resty-openidc version 1.7.2, but everything I write should hold for 1.7.4 as well, based on the diff of the two versions' code.
I can clearly see from the debug-level logs that the session is created during the first access and then reused for the second realm, which is wrong, as the second access still has a token for the first realm... Here is what the authorization for realm2 looks like:
2021/04/28 12:56:41 [debug] 2615#2615: *4617979 [lua] openidc.lua:1414: authenticate(): session.present=true, session.data.id_token=true, session.data.authenticated=true, opts.force_reauthorize=nil, opts.renew_access_token_on_expiry=nil, try_to_renew=true, token_expired=false
2021/04/28 12:56:41 [debug] 2615#2615: *4617979 [lua] openidc.lua:1470: authenticate(): id_token={"azp":"realm1","typ":"ID","iat":1619614598,"iss":"https:\/\/<myserver>\/keycloak\/auth\/realms\/realm1","aud":"realm1","nonce":"8c8ca2c4df2...b26"
,"jti":"1c028c65-...0994f","session_state":"0e1241e3-66fd-4ca1-a0dd-c0d1a6a5c708","email_verified":false,"sub":"25303e44-...e2c1757ae857","acr":"1","preferred_username":"logoutuser","auth_time":1619614598,"exp":1619614898,"at_hash":"5BNT...j414r72LU6g"}
OK, this took me some time. It may also be that most tutorials out there leave this vulnerability open (only in a setup where a single nginx uses multiple realms): having authenticated against one realm allows authentication access to any other.
A typical authentication call from the tutorials out there is:
location /service1/ {
    access_by_lua_block {
        local opts = {
            redirect_uri_path = "/realm1/authenticated",
            discovery = "https://<myserver>/keycloak/auth/realms/realm1/.well-known/openid-configuration",
            client_id = "client1",
            client_secret = <........>,
            session_contents = {id_token=true} -- this is essential for safari!
        }
        -- call introspect for OAuth 2.0 Bearer Access Token validation
        local res, err = require("resty.openidc").authenticate(opts)
        if err then
            ngx.status = 403
            ngx.say(err)
            ngx.exit(ngx.HTTP_FORBIDDEN)
        end
    }
    # I disabled caching so the browser won't cache the site.
    expires 0;
    add_header Cache-Control private;
    proxy_pass http://realm1-server:port/service1/;
    proxy_set_header Host $http_host;
    proxy_http_version 1.1;
    proxy_redirect off;
    proxy_buffering off;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
location /service2/ {
    <same for realm2>
}
There actually seem to be two issues:
We do not check the realm id (this is the vulnerability).
Sessions for the two realms are cached interchangeably (this would lead to a situation where, if we fix (1), we would be allowed to access only one realm and would have to log out from realm1 to access realm2).
Solutions:
1) We need to explicitly check that the realm is correct.
2) We should use one session table per realm (notice that although this would appear to solve (1) as well, it does not if an attacker mixes and matches session ids with his "special" browser; at least I think so).
For (2) there was no documentation; I had to read the code of openidc.lua, and from there the code of the library it uses (session.lua).
The changes are as follows:
location /service1/ {
    access_by_lua_block {
        local opts = {
            redirect_uri_path = "/realm1/authenticated",
            discovery = "https://<myserver>/keycloak/auth/realms/realm1/.well-known/openid-configuration",
            client_id = "client1",
            client_secret = <........>,
            session_contents = {id_token=true} -- this is essential for safari!
        }
        -- call introspect for OAuth 2.0 Bearer Access Token validation
        local res, err = require("resty.openidc").authenticate(opts, nil, nil, {name=opts.client_id})
        if (err or (res.id_token.azp ~= opts.client_id)) then
            ngx.status = 403
            ngx.say(err)
            ngx.exit(ngx.HTTP_FORBIDDEN)
        end
    }
    <..................no changes here................>
}
location /service2/ {
    <same for realm2>
}
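For what it's worth, the fourth argument to authenticate() is the session options table that lua-resty-openidc hands down to lua-resty-session, and its name field sets the session cookie name. A brief illustration of the effect, assuming the client_id values used above:
-- with {name = opts.client_id}, each realm keeps its own session cookie:
--   requests for service1 carry  Cookie: client1=<session id>
--   requests for service2 carry  Cookie: client2=<session id>
-- so a session established for realm1 cannot be replayed at realm2, and the
-- explicit res.id_token.azp check rejects tokens minted for another client
local res, err = require("resty.openidc").authenticate(
    opts, nil, nil, { name = opts.client_id })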

why nginx proxy_pass to sub directory is not working?

I am trying to do a proxy_pass to a subdirectory, http://ms.server.com:8085/ms. So whenever anyone hits http://ms.example.com, they should be redirected to http://ms.example.com/ms. I am trying to do that with the configuration below:
upstream example {
    server ms.server.com:8085;
}
server {
    server_name ms.example.com;
    location / {
        proxy_pass http://example/ms;
    }
}
Right now I just end up on the "Nginx Test page".
proxy_pass tells Nginx where to send proxied requests; it proxies transparently rather than issuing a redirect, so the URL in the client's browser never changes.
Since you actually want the browser to land on /ms, what you want to use instead is a return statement, like so:
upstream example {
    server ms.server.com:8085;
}
server {
    server_name ms.example.com;
    location = / {
        return 301 http://ms.example.com/ms;
    }
    location / {
        proxy_pass http://example;
    }
}
This will redirect requests to the root URL of the server_name http://ms.example.com using a 301 redirect, and all other traffic will be passed through to the defined upstream.
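As an aside: if the intent were instead to serve the backend's /ms application transparently under / (no client-visible redirect at all), proxy_pass with a URI part and a trailing slash would do it. A sketch under that assumption:
location / {
    # the URI part "/ms/" replaces the matched prefix "/",
    # so a request for /foo is proxied to the backend as /ms/foo
    proxy_pass http://example/ms/;
}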

Nginx Caching for Rest API

I have created a Spring Boot project. I have to cache one REST API call:
GET localhost:8080/parts
For that I have used nginx, but it is not working: every time I call the API, the call goes to the backend. My configuration file is given below.
/usr/local/etc/nginx/nginx.conf
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    error_page 500 502 503 504 /50x.html;
    proxy_cache_path /var/log/oms levels=1:2 keys_zone=webcache:10m inactive=1d max_size=2000m;
    proxy_cache_min_uses 1;
    #upstream backend_server {
    #    server localhost:8080;
    #}
    server {
        listen 80;
        server_name localhost;
        location /parts {
            proxy_pass http://localhost:8080/parts;
            proxy_cache webcache;
        }
        #location / {
        #    proxy_pass http://localhost:8080;
        #    proxy_cache webcache;
        #}
    }
    include servers/*;
}
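A likely culprit here (an assumption, since the backend's response headers aren't shown): proxy_cache only stores responses it considers cacheable. If the Spring Boot backend sends no Cache-Control/Expires headers, or sends Set-Cookie, nginx will not cache the responses unless told how long to keep them. A minimal sketch of the location with explicit cache validity:
location /parts {
    proxy_pass http://localhost:8080/parts;
    proxy_cache webcache;
    # cache successful responses for 10 minutes even without
    # upstream caching headers
    proxy_cache_valid 200 10m;
    # optional: ignore upstream headers that would disable caching
    proxy_ignore_headers Set-Cookie Cache-Control Expires;
    # optional: expose cache HIT/MISS for debugging
    add_header X-Cache-Status $upstream_cache_status;
}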

domain to www.domain redirection

I have an nginx server behind Varnish, and I am trying to do a domain to www.domain redirection.
I tried using these rules in nginx:
rewrite ^(.*) http://www.domain.com$1 permanent;
return 301 ^ $scheme://www.domain.com$request_uri;
return 301 http://www.domain.com$request_uri;
But Chrome reports an error: the website is running into a redirect loop.
As the above rules did not work, I tried an alternative, writing rules in Varnish:
sub vcl_recv {
    // ...
    if (req.http.host == "domain.com") {
        error 750 "http://www." + req.http.host + req.url;
    }
    // ...
}
sub vcl_error {
    // ...
    if (obj.status == 750) {
        set obj.http.Location = obj.response;
        # Set HTTP 301 for permanent redirect
        set obj.status = 301;
        return (deliver);
    }
    // ...
}
I am using Varnish 4, and I get an error that Varnish can't compile the code.
Message from VCC-compiler:
Expected an action, 'if', '{' or '}'
('input' Line 29 Pos 3)
error 750 regsub(req.http.host, "^www\.(.*)", "http://\1");
--#####------------------------------------------------------------------------------------------------------------------------
Could someone please help me fix this?
My server block is as follows:
server {
    listen 127.0.0.1:8080;
    root /home/webadmin/html/livesite;
    index index.php index.html index.htm;
    server_name www.domain.com;
    # rewrite ^(.*) http://www.domain.com$1 permanent;
    # return 301 ^ $scheme://www.domain.com$request_uri;
    # return 301 http://www.domain.com$request_uri;
    location / {
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }
    error_page 404 403 /error/error.html;
    location = /error/error.html {
        root /home/webadmin/html/livesite;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /home/webadmin/html/livesite;
    }
    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9$
    location ~ \.php$ {
        #fastcgi_pass 127.0.0.1:9000;
        # With php5-fpm:
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }
}
This would be a correct way to do it while preserving the URI and the query string ($request_uri already includes the query string, so nothing else needs to be appended):
return 301 $scheme://www.domain.com$request_uri;
The problem isn't in this part; the problem is where you are redirecting to, probably another location that also does a redirect. I would guess you don't have a separate server block to handle the www host, so you keep redirecting to the same place over and over; I wouldn't know for sure until you post the rest of the config.
EDIT:
The issue, like I said, is that the redirect lives in the www server block itself, so you keep redirecting www to the www server. To avoid that, create a separate server block without the www to do the redirection:
server { # the redirecting server
    listen 8080; # according to your config
    server_name domain.com; # without www
    return 301 $scheme://www.domain.com$request_uri;
}
server { # the actual serving server
    listen 8080;
    server_name www.domain.com;
    # the rest of your actual settings
}
Use Varnish for this instead, and skip the nginx rewrites.
This example always redirects to https, but the semantics are the same.
In sub vcl_recv:
if (req.http.host ~ "^(?i)domain.com") {
    set req.http.X-Redir-Url = "https://domain.com" + req.url;
    error 750 req.http.X-Redir-Url;
}
and then in sub vcl_error:
if (obj.status == 750) {
    set obj.http.Location = obj.response;
    set obj.status = 302;
    return (deliver);
}
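Note that both snippets above use Varnish 3 syntax, which is exactly why the asker's VCC compiler complains: Varnish 4 replaced error with synth, and vcl_error with vcl_synth. A sketch of the same redirect in Varnish 4 syntax, assuming the same host names:
sub vcl_recv {
    if (req.http.host == "domain.com") {
        # 750 is an arbitrary private status used to trigger the synth path
        return (synth(750));
    }
}

sub vcl_synth {
    if (resp.status == 750) {
        set resp.status = 301;
        set resp.http.Location = "http://www.domain.com" + req.url;
        return (deliver);
    }
}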

Return custom 403 error page with nginx

I'm trying to display the error page in /temp/www/error403.html whenever a 403 error occurs.
This should happen whenever a user tries to access the site via https (SSL) and its IP is in the blockips.conf file, but at the moment it still shows nginx's default error page.
I have the same code on my other server (without any blocking) and it works.
Is it blocking the IP from accessing the custom 403 page?
If so, how do I get it to work?
server {
    # ssl
    listen 443;
    ssl on;
    ssl_certificate /etc/nginx/ssl/site.in.crt;
    ssl_certificate_key /etc/nginx/ssl/site.in.key;
    keepalive_timeout 70;
    server_name localhost;
    location / {
        root /temp/www;
        index index.html index.htm;
    }
    # redirect server error pages to the static page
    error_page 403 /error403.html;
    # location = /error403.html {
    #     root /temp/www;
    # }
    # add trailing slash if missing
    if (-f $document_root/$host$uri) {
        rewrite ^(.*[^/])$ $1/ permanent;
    }
    # list of IPs to block
    include blockips.conf;
}
Edit:
I corrected the error_page code from 504 to 403, but I still have the same issue.
I did heaps of googling before coming here, but did some more just now; within 5 minutes I had my answer :P
Seems I'm not the only person to have this issue:
error_page 403 /e403.html;
location = /e403.html {
    root html;
    allow all;
}
http://www.cyberciti.biz/faq/unix-linux-nginx-custom-error-403-page-configuration/
Seems that I was right in thinking that access to my error page was getting blocked.
The problem might be that you're trying to serve a 403 "Forbidden" error from a webserver that the client is forbidden from accessing. Nginx treats the error_page directive as an internal redirect, so it is trying to serve https://example.com/error403.html, which is also forbidden.
So you need to make the error page not be served over https, like this:
error_page 403 http://example.com/error403.html;
or add the necessary "access allowed" options to the location for the error page path. The way to test this is to access the /error403.html page directly. If you can't access it that way, it isn't going to work when someone hits an actual 403 error.
I had the same issue... The point is that I've implemented an IP whitelist at the server context level (or vhost level if you prefer), so every location inherits it as well (basically /403.html won't be accessible):
server {
    listen *:443 ssl;
    server_name mydomain.com;
    error_page 403 /403.html;
    .....
    if ($exclusion = 0) { return 403; } # implemented in another conf.d file (see below)
    location ~ \.php$ {
        root /var/www/vhosts/mydomain.com/httpdocs;
        include /etc/nginx/fastcgi_par
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_connect_timeout 3m;
        fastcgi_read_timeout 3m;
        fastcgi_send_timeout 3m;
    }
    location /403.html {
        root /usr/share/nginx/html;
        allow all;
    }
    ...
}
Exclusion conf.d file sample:
geo $exclusion {
    default 0;
    10.0.0.0/8 1; # local network
    80.23.120.23 1; # some_ip
    ...
}
To fix that, simply do your return 403 at the location level (context):
server {
    listen *:443 ssl;
    server_name mydomain.com;
    error_page 403 /403.html;
    .....
    location ~ \.php$ {
        if ($exclusion = 0) { return 403; }
        root /var/www/vhosts/mydomain.com/httpdocs;
        include /etc/nginx/fastcgi_par
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_connect_timeout 3m;
        fastcgi_read_timeout 3m;
        fastcgi_send_timeout 3m;
    }
    location /403.html {
        root /usr/share/nginx/html;
        allow all;
    }
    ...
}
Works for me.
It looks like there was a boo-boo in the listed configuration, as it was only sending error code 504 ("gateway timeout") to the custom page, so for 403 ("forbidden") you want:
error_page 403 /error403.html;
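Putting the answers together for the server block in the question: bring back the commented-out location and add allow all, so the block list cannot shadow the error page itself. A minimal sketch reusing the question's paths:
error_page 403 /error403.html;
location = /error403.html {
    root /temp/www;
    allow all; # let blocked IPs fetch the error page itself
}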