Keycloak: User is able to access a realm by logging in to another - keycloak

I have an nginx/openresty client to a Keycloak server for authorization using OpenID Connect.
I am using lua-resty-openidc to allow access to services behind the proxy.
I have created two clients at two different realms for different services.
The problem is that after a user gets authenticated at the first realm (e.g. at https://<my-server>/auth/realms/<realm1>/protocol/openid-connect/auth?response_type=code&client_id=openresty&state=...........), he is able to directly access the other service at realm2 as well.
What is going on here? How can I ensure that the user will only be able to access the client at the realm he authenticated against?
How can I ensure that after logout the user will no longer be able to get access until he logs in anew?
[Edit-details]
My nginx.conf for the two services is below.
The user first accesses https://<my-server>/service_1/
and is redirected to Keycloak to give his password for realm1. He provides it and is able to access service_1.
However, if after that he tries to access https://<my-server>/service_2/, he no longer has to authenticate but is logged in, even though service_2 is a client on a different realm, with a different client_secret!
.....
location /service_1/ {
access_by_lua_block {
local opts = {
redirect_uri_path = "/service_1/auth", -- we are send here after auth
discovery = "https://<my-server>/keycloak/auth/realms/realm1/.well-known/openid-configuration",
client_id = "openresty",
client_secret = "<client1-secret>",
session_contents = {id_token=true} -- this is essential for safari!
}
-- call authenticate for OpenID Connect user authentication
local res, err = require("resty.openidc").authenticate(opts)
if err then
ngx.status = 403
ngx.say(err)
ngx.exit(ngx.HTTP_FORBIDDEN)
end
}
# I disabled caching so the browser won't cache the site.
expires 0;
add_header Cache-Control private;
proxy_pass http://<server-for-service1>:port1/foo/;
proxy_set_header Host $http_host;
proxy_http_version 1.1;
proxy_redirect off;
proxy_buffering off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
..........................
location /service_2/ {
access_by_lua_block {
local opts = {
redirect_uri_path = "/service_2/auth", -- we are send here after auth
discovery = "https://<my-server>/keycloak/auth/realms/realm2/.well-known/openid-configuration",
client_id = "openresty",
client_secret = "client2-secret",
session_contents = {id_token=true} -- this is essential for safari!
}
-- call authenticate for OpenID Connect user authentication
local res, err = require("resty.openidc").authenticate(opts)
if err then
ngx.status = 403
ngx.say(err)
ngx.exit(ngx.HTTP_FORBIDDEN)
end
}
# I disabled caching so the browser won't cache the site.
expires 0;
add_header Cache-Control private;
proxy_pass http://<server-for-service2>:port2/bar/;
proxy_set_header Host $http_host;
proxy_http_version 1.1;
proxy_redirect off;
proxy_buffering off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
[Edit-details 2]
I am using lua-resty-openidc version 1.7.2, but everything I write should also hold for 1.7.4, based on the diff of the two versions' code.
I can clearly see from the debug-level logs that the session is created during the first access and then reused for the second realm, which is wrong, as the second access still carries a token for the first realm... Here is what the authorization for realm2 looks like:
2021/04/28 12:56:41 [debug] 2615#2615: *4617979 [lua] openidc.lua:1414: authenticate(): session.present=true, session.data.id_token=true, session.data.authenticated=true, opts.force_reauthorize=nil, opts.renew_access_token_on_expiry=nil, try_to_renew=true, token_expired=false
2021/04/28 12:56:41 [debug] 2615#2615: *4617979 [lua] openidc.lua:1470: authenticate(): id_token={"azp":"realm1","typ":"ID","iat":1619614598,"iss":"https:\/\/<myserver>\/keycloak\/auth\/realms\/realm1","aud":"realm1","nonce":"8c8ca2c4df2...b26"
,"jti":"1c028c65-...0994f","session_state":"0e1241e3-66fd-4ca1-a0dd-c0d1a6a5c708","email_verified":false,"sub":"25303e44-...e2c1757ae857","acr":"1","preferred_username":"logoutuser","auth_time":1619614598,"exp":1619614898,"at_hash":"5BNT...j414r72LU6g"}

OK, this took me some time. It may also be that most tutorials out there leave this vulnerability open (it only applies to a setup where a single nginx uses multiple realms): being authenticated against one realm allows access to services on any other realm.
A typical authentication call from tutorials out there is:
location /service1/ {
access_by_lua_block {
local opts = {
redirect_uri_path = "/realm1/authenticated",
discovery = "https://<myserver>/keycloak/auth/realms/realm1/.well-known/openid-configuration",
client_id = "client1",
client_secret = <........>,
session_contents = {id_token=true} -- this is essential for safari!
}
-- call authenticate for OpenID Connect user authentication
local res, err = require("resty.openidc").authenticate(opts)
if err then
ngx.status = 403
ngx.say(err)
ngx.exit(ngx.HTTP_FORBIDDEN)
end
}
# I disabled caching so the browser won't cache the site.
expires 0;
add_header Cache-Control private;
proxy_pass http://realm1-server:port/service1/;
proxy_set_header Host $http_host;
proxy_http_version 1.1;
proxy_redirect off;
proxy_buffering off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
location /service2/ {
<same for realm2>
}
There actually seem to be two issues:
1) We do not check for the realm id (this is a vulnerability).
2) Sessions for the two realms are cached interchangeably (this would lead to a situation where, if we fix (1), we would now be allowed to access only one realm and would have to log out from realm1 to access realm2).
Solutions:
1) We need to explicitly check that the realm is correct.
2) We should use one session table per realm (notice that although this would appear to solve (1) as well, it does not if an attacker mixes and matches session ids with his "special" browser -- at least I think).
For no. 2 there was no documentation; I had to read the code of openidc.lua and, from there, the code of the library it uses (session.lua).
The changes are as follows:
location /service1/ {
access_by_lua_block {
local opts = {
redirect_uri_path = "/realm1/authenticated",
discovery = "https://<myserver>/keycloak/auth/realms/realm1/.well-known/openid-configuration",
client_id = "client1",
client_secret = <........>,
session_contents = {id_token=true} -- this is essential for safari!
}
-- call authenticate for OpenID Connect user authentication
local res, err = require("resty.openidc").authenticate(opts,nil,nil,{name=opts.client_id})
if (err or ( res.id_token.azp ~= opts.client_id ) ) then
ngx.status = 403
ngx.say(err)
ngx.exit(ngx.HTTP_FORBIDDEN)
end
}
<..................no changes here................>
}
location /service2/ {
<same for realm2>
}
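For reference, this is my understanding of what the extra argument does (it is not documented; it comes from reading the lua-resty-openidc 1.7.x code, which hands the table over to lua-resty-session), shown as a small sketch:
-- sketch, based on reading openidc.lua and session.lua:
-- authenticate(opts, target_url, unauth_action, session_opts) forwards
-- session_opts to lua-resty-session, whose "name" field sets the session
-- cookie name. Using the client_id therefore gives each realm/client its
-- own cookie instead of the shared default "session" cookie.
local session_opts = { name = opts.client_id }
local res, err = require("resty.openidc").authenticate(opts, nil, nil, session_opts)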

Related

why nginx proxy_pass to sub directory is not working?

I am trying to do a proxy_pass to a subdirectory, http://ms.server.com:8085/ms. So whenever anyone hits http://ms.example.com it should be redirected to http://ms.example.com/ms. I am trying to do that through the configuration below:
upstream example {
server ms.server.com:8085;
}
server {
server_name ms.example.com;
location / {
proxy_pass http://example/ms;
}
}
Right now I am just being redirected to the "Nginx Test page".
proxy_pass is used to inform Nginx where to send proxied requests at a host level, and does not account for the URI.
What you want to use instead is a return statement, like so:
upstream example {
server ms.server.com:8085;
}
server {
server_name ms.example.com;
location = / {
return 301 http://ms.example.com/ms;
}
location / {
proxy_pass http://example;
}
}
This will redirect requests to the root URL of the server_name http://ms.example.com using a 301 redirect, and all other traffic will be passed through to the defined upstream.

nginx redirection depending on host

I have two domains website1.com and website2.com linked to my server.
I'm trying to do the following rewrite rules:
http://website1.com/ --> /website1/ (static)
http://website2.com/ --> /website2/ (static)
http://website1.com/app/ --> http://localhost:8080/web-app/web1/
http://website2.com/app/ --> http://localhost:8080/web-app/web2/
The user will be redirected to a static website served by nginx or an application server depending on the url.
Here's what I tried so far:
location / {
root html;
index index.html index.htm;
if ($http_host = website1.com) {
rewrite / /website1/index.html break;
rewrite (.*) /website1/$1;
}
if ($http_host = website2.com) {
#same logic
}
}
location /app/ {
proxy_pass http://localhost:8080/web-app/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
if ($http_host = website1.com) {
rewrite /app/(.*) /$1 break;
rewrite /app /index.html;
}
if ($http_host = website2.com) {
#same logic
}
}
The static part seems to work fine, but the web-app redirection part seems to serve index.html no matter what file is requested.
This is not much of a definitive answer, but rather just my explanation of how I get nginx proxies to work.
root html;
index index.html index.htm;
server {
listen 80;
server_name website1.com;
location / {
alias html/website1/;
}
location /app/ {
proxy_pass http://localhost:8080/web-app/web1/;
}
}
server {
listen 80;
server_name website2.com;
location / {
alias html/website2/;
}
location /app/ {
proxy_pass http://localhost:8080/web-app/web2/;
}
}
The issue looks like it's being caused by these rewrites:
rewrite /app/(.*) /$1 break;
rewrite /app /index.html;
Using server blocks with server_names and the alias directive, we can do away with needing to use that much logic. Let me know if there's anything that is still not clear.
I think you're doing it wrong. If there is so much difference between the hosts, it would be cleaner and more efficient to have two distinct configurations, one for each host.
On the other hand, if your intention is to have multiple almost-identical configurations, then the correct way to go about it might be map, and not if.
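For illustration, here is a rough sketch of the map-based variant (untested; the host names and backend paths are placeholders for your actual setup, and the map block must sit at the http level, outside server{}):
map $host $webapp_root { # lives at the http level, outside server{}
website1.com /web-app/web1;
website2.com /web-app/web2;
default /web-app;
}
server {
listen 80;
server_name website1.com website2.com;
location /app/ {
rewrite ^/app/(.*)$ $webapp_root/$1 break; # rewrite once, using the mapped value
proxy_pass http://localhost:8080;
proxy_set_header Host $host;
}
}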
Back to your configuration — I've tried running it just to see how it works, and one thing that you may notice is that the path you specify within the proxy_pass effectively becomes a noop once the $host-specific rewrite within the same context gets involved to change the $uri — this is by design, and is very clearly documented within http://nginx.org/r/proxy_pass ("When the URI is changed inside a proxied location using the rewrite directive").
So, in fact, using the following configuration does appear to adhere to your spec:
%curl -H "Host: host1.example.com" "localhost:4935/app/"
host1.example.com/web-app/web1/
%curl -H "Host: host2.example.com" "localhost:4935/app/"
host2.example.com/web-app/web2/
%curl -H "Host: example.com" "localhost:4935/app/"
example.com/web-app/
Here's the config I've used:
server {
listen [::]:4935;
default_type text/plain;
location / {
return 200 howdy;
}
location /app/ {
proxy_set_header Host $host;
proxy_pass http://localhost:4936/web-app/; # path is a NOOP if $uri gets changed
if ($host = host1.example.com) {
rewrite /app/(.*) /web-app/web1/$1 break;
rewrite /app /web-app/index.html;
}
if ($host = host2.example.com) {
rewrite /app/(.*) /web-app/web2/$1 break;
rewrite /app /web-app/index.html;
}
}
}
server {
listen [::]:4936;
return 200 $host$request_uri\n;
}

NGINX Websocket 302 (redirect) error

http://prntscr.com/coliya -Chrome
http://prntscr.com/coljez -Opera
NGINX
server {
listen 0.0.0.0:80;
listen 0.0.0.0:443 ssl;
root /usr/share/nginx/html;
index index.html index.htm;
ssl on;
ssl_certificate /etc/ssl/certs/ssl-bundle.crt;
ssl_certificate_key /etc/ssl/private/budokai-onlinecom.key;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kED$
ssl_dhparam /etc/ssl/private/dhparmas.pem;
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
if ($ssl_protocol = "") {
rewrite ^ https://$host$request_uri? permanent;
}
large_client_header_buffers 8 32k;
location / {
proxy_http_version 1.1;
proxy_set_header Accept-Encoding "";
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X_FORWARDED_PROTO https;
proxy_set_header X-NginX-Proxy true;
proxy_buffers 8 32k;
proxy_buffer_size 64k;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_read_timeout 86400;
proxy_pass http://budokai-online.com:8080;
}
The problem I'm having is that some computers and some browsers are being redirected when trying to get a connection to the websocket. When that 302 error shows up, the '/*' route has been activated! This route redirects the user to the login page, as you saw in the redirect response. The websocket upgrade request is turned into an ordinary http request somehow, somewhere! This seems to be where the problem is. What can be causing this?
I had the same problem, but it was related to Varnish settings. If you are using Varnish, add:
sub vcl_recv {
if (req.http.upgrade ~ "(?i)websocket") {
return (pipe);
}
}
sub vcl_pipe {
if (req.http.upgrade) {
set bereq.http.upgrade = req.http.upgrade;
set bereq.http.connection = req.http.connection;
}
}
check this link for reference:
https://varnish-cache.org/docs/4.1/users-guide/vcl-example-websockets.html

nginx lua not able to set headers before rewrite

I have a set of IP addresses in my redis server that should be blocked. Now when a client makes a request:
the request must be intercepted by nginx
check if the remote_addr belongs to the blocked ip
add a header if the ip is blocked
then redirect to the actual ip address with the request_uri.
nginx.conf
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
lua_shared_dict ip_status 1m;
server {
listen 9080;
server_name localhost;
location ~ .* {
rewrite_by_lua_file src/ip_check.lua;
}
}
}
src/ip_check.lua
-- redis configuration
local redis_host = "127.0.0.1"
local redis_port = 6379
-- connection timeouts for redis in ms.
local redis_max_idle_timeout = 10000
local redis_pool_size = 2 -- don't set this too high!
local redis_timeout = 200
-- check a set with this key for blacklist entries
local redis_key = ngx.var.remote_addr
local ip_status = ngx.shared.ip_status
local status_unblocked = "0"
local status_blocked = "1"
-- cache lookups for this many seconds
local cache_ttl = 1800
local redirect_host = "http://192.168.12.103:8080/Spring4MVCHelloWorld1"
local header_ip_status_key = "Ip-Status"
-- value of the header to be sent when the client ip address is blocked
local header_ip_status_value = "block"
local add_header = status_unblocked
-- lookup the value in the cache
local cache_result = ip_status:get(redis_key)
if cache_result then
if cache_result == status_blocked then
add_header = status_blocked
end
else
-- lookup against redis
local resty = require "resty.redis"
local redis = resty:new()
redis:set_timeout(redis_timeout)
local connected, err = redis:connect(redis_host, redis_port)
if not connected then
ngx.log(ngx.ERR, "ip_check: could not connect to redis #"..redis_host..":"..redis_port.." - "..err)
else
ngx.log(ngx.ALERT, "ip_check: found connect to redis #"..redis_host..":"..redis_port.." - successful")
local result, err = redis:get(redis_key)
if not result then
ngx.log(ngx.ERR, "ip_check: lookup failed for "..ngx.var.remote_addr.." - "..err)
else
if result == status_blocked then
add_header = status_blocked
end
-- cache the result from redis
ip_status:set(redis_key, add_header, cache_ttl)
end
redis:set_keepalive(redis_max_idle_timeout, redis_pool_size)
end
end
ngx.log(ngx.ALERT, "ip_check: "..header_ip_status_key.." of "..ngx.var.remote_addr.." is "..add_header)
if add_header == status_blocked then
ngx.header[header_ip_status_key] = header_ip_status_value
ngx.req.set_header(header_ip_status_key, header_ip_status_value)
end
ngx.redirect(redirect_host..ngx.var.request_uri)
For testing purposes, I added a 127.0.0.1 key to redis with value 1, so the redirect URI should be hit with the additional header. The problem I'm facing is that no matter whether I use ngx.header or ngx.req.set_header, the Ip-Status header is not sent with the redirected request and the end API does not receive it.
For example, if I hit http://localhost:9080/hello in the browser,
Request headers
Host:"localhost:9080"
User-Agent:"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:46.0) Gecko/20100101 Firefox/46.0"
Accept:"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"
Accept-Language:"en-US,en;q=0.5"
Accept-Encoding:"gzip, deflate"
Connection:"keep-alive"
Response headers
Connection:"keep-alive"
Content-Length:"166"
Content-Type:"text/html"
Date:"Thu, 05 May 2016 08:06:33 GMT"
Location:"http://192.168.12.103:8080/Spring4MVCHelloWorld1/hello"
Server:"openresty/1.9.7.2"
ip-status:"block"
The redirected uri is http://localhost:9080/hello,
Request headers
Host:"192.168.12.103:8080"
User-Agent:"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:46.0) Gecko/20100101 Firefox/46.0"
Accept:"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"
Accept-Language:"en-US,en;q=0.5"
Accept-Encoding:"gzip, deflate"
Cookie:"JSESSIONID=4834843FE0E76170E429028E096A66E5"
Connection:"keep-alive"
Response headers
Content-Language:"en-US"
Content-Length:"166"
Content-Type:"text/html;charset=UTF-8"
Date:"Thu, 05 May 2016 08:06:33 GMT"
Server:"Apache-Coyote/1.1"
I'm able to see the Ip-Status header in the response headers of the original request, but not in the request headers of the redirected URI. Any help on how to send the header along with the redirected request would be very helpful.
I'm new to nginx and Lua. I'm asking because I couldn't find any corresponding questions; apologies if this has already been asked.
It is the browser that follows the redirect to the new URI, and nginx has no control over the headers sent with that request. So I removed
ngx.redirect(redirect_host..ngx.var.request_uri)
from src/ip_check.lua and changed nginx.conf to make a proxy call instead, and I was able to observe that the API received the additional header.
Modified nginx.conf
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
lua_shared_dict ip_status 1m;
server {
listen 9080;
server_name localhost;
location ~ .* {
set $backend_host "http://192.168.12.103:8080/Spring4MVCHelloWorld1";
access_by_lua_file src/ip_check.lua;
proxy_pass $backend_host$request_uri;
}
}
}
This modified nginx.conf will make a request to $backend_host$request_uri, and the browser will not have any knowledge of the redirections made. Hence, the headers set by ngx.req.set_header will be sent when making the proxy call. So,
ngx.header[header_ip_status_key] = header_ip_status_value
can also be removed from src/ip_check.lua.
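To make that concrete, the end of src/ip_check.lua then reduces to roughly the following sketch (the redis/cache lookup code above it stays exactly as it was):
-- sketch of the tail of src/ip_check.lua when run via access_by_lua_file
if add_header == status_blocked then
-- set the header on the request that proxy_pass will forward upstream
ngx.req.set_header(header_ip_status_key, header_ip_status_value)
end
-- no ngx.redirect() at the end: the proxy_pass in the location block now
-- forwards the (possibly header-augmented) request to the backend itself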

how to get remote_ip from socket in phoenix-framework?

How do I get remote_ip from the socket in Phoenix Framework? I can get it from conn in a View, but not in a Channel.
Many thanks for help!
Copy of the answer provided here: https://elixirforum.com/t/phoenix-socket-channels-security-ip-identification/1463/3 (all the credit goes to https://elixirforum.com/u/arjan)
Phoenix 1.4 update:
Since Phoenix 1.4, you can get connection information from the underlying transport. What kind of information you get is transport dependent, but with the WebSocket transport it is possible to retrieve the peer info (ip address) and a list of x- headers (for x-forwarded-for resolving).
Configure your socket like this in your endpoint.ex:
socket("/socket", MyApp.Web.UserSocket,
websocket: [connect_info: [:peer_data, :x_headers]],
longpoll: [connect_info: [:peer_data, :x_headers]]
)
And then your UserSocket module must expose a connect/3 function like this:
def connect(_params, socket, connect_info) do
{:ok, socket}
end
On connect, the connect_info parameter now contains info from the transport:
info: %{
peer_data: %{address: {127, 0, 0, 1}, port: 52372, ssl_cert: nil},
x_headers: []
}
UPDATE
If your Phoenix app is not handling traffic directly but receives it from a reverse proxy like nginx, then peer_data will contain the nginx IP address, not the client's.
To fix this you can tell nginx (or whatever proxy you use) to pass the original IP in a header and then read it from there.
So your phoenix location should look something like this:
location /phoenix/ {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://phoenix/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}
and your socket code should have this:
defp get_ip_address(%{x_headers: headers_list}) do
header = Enum.find(headers_list, fn {key, _val} -> key == "x-real-ip" end)
case header do
nil ->
nil
{_key, value} ->
value
_ ->
nil
end
end
defp get_ip_address(_) do
nil
end
and change connect to something like this
def connect(params, socket, connect_info) do
socket = assign(socket, :ip_address, get_ip_address(connect_info))
{:ok, socket}
end
The answer right now is: you can't. You can't access the connection in channels because channels are transport agnostic. Open up an issue in Phoenix detailing your use case so the Phoenix team can act on it.
Good news! As of LiveView 0.17.7 it's available out of the box:
see https://hexdocs.pm/phoenix_live_view/Phoenix.LiveView.html
to summarize:
in endpoint.ex find the socket definition and add
socket "/live", Phoenix.LiveView.Socket, websocket: [connect_info: [:peer_data, session: #session_options]]
in the socket mount() function
def mount(_params, _session, socket) do
peer_data = get_connect_info(socket, :peer_data)
{:ok, socket}
end
note: it's only available on mount() and terminate()