GET of JWKS_URI is failing when using angular-oauth2-oidc library - keycloak

I am using angular-oauth2-oidc to connect to Keycloak server which is behind nginx.
I am using code flow for retrieving the token.
Even when using plain HTTP, I see a GET request being sent for /protocol/openid-connect/certs,
which fails because the frontend URLs in the discovery document have the correct hostname, while the backend URLs (such as the jwks_uri) have localhost.
Also, I cannot set KC_HOSTNAME_STRICT_BACKCHANNEL=true, since then the backend microservices would no longer be able to connect to Keycloak.
Is there a workaround for this?
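A minimal sketch of one possible client-side workaround, assuming angular-oauth2-oidc issues its discovery and JWKS requests through Angular's HttpClient (it requires HttpClientModule): an HTTP interceptor can rewrite the bad localhost base URL before the certs request leaves the browser. WRONG_BASE and PUBLIC_BASE are hypothetical values to adjust to your deployment; the cleaner fix, where the setup allows it, is usually to have nginx forward the Host and X-Forwarded-* headers so Keycloak builds the backend URLs from the public hostname.

```typescript
import { Injectable } from '@angular/core';
import {
  HttpEvent, HttpHandler, HttpInterceptor, HttpRequest,
} from '@angular/common/http';
import { Observable } from 'rxjs';

// Hypothetical host names — adjust to your deployment.
const WRONG_BASE = 'http://localhost:8080';
const PUBLIC_BASE = 'https://keycloak.example.com';

@Injectable()
export class JwksHostRewriteInterceptor implements HttpInterceptor {
  intercept(req: HttpRequest<unknown>, next: HttpHandler): Observable<HttpEvent<unknown>> {
    // angular-oauth2-oidc fetches the discovery document and the jwks_uri via
    // HttpClient, so a broken localhost URL can be rewritten on the way out.
    if (req.url.startsWith(WRONG_BASE)) {
      req = req.clone({ url: req.url.replace(WRONG_BASE, PUBLIC_BASE) });
    }
    return next.handle(req);
  }
}

// Registration (in AppModule providers):
// { provide: HTTP_INTERCEPTORS, useClass: JwksHostRewriteInterceptor, multi: true }
```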

Related

Whitelist web application for API access without API key

We're developing a web application (SPA) consisting of the following parts:
NextJS container
Django backend for user management
Data API (FastAPI) protected with API keys, for which we also provide 3rd party access
The NextJS container uses an API key to access the data API. We don't want to expose the API key to the client (browser), so the browser sends the API requests to the NextJS container, which then relays them to the data API. This seems secure, but is more complicated and slower than sending requests from the browser to the data API directly.
I'm wondering if it's possible to whitelist the web application in the data API, so that the client (browser) can call the data API directly without API key, but 3rd parties can't. FastAPI provides a TrustedHostMiddleware, but it's insecure because it's possible to spoof the host header. It has been suggested to whitelist IPs instead, but we don't have a dedicated IP for our web application. I looked into using the referer header, but it's not available in the FastAPI request object for some reason (I suspect some config problem in our hosting). Also, the referer header could be spoofed as well.
Is there even a safe way to whitelist our web application for data API access, or do we need to relay the request via NextJS container and use an API key?
Is there even a safe way to whitelist our web application for data API access?
No: in all cases you need an authentication mechanism, something in front of the backend that checks whether the client is an authorized client.
The simplest pattern is using the NextJS container as a proxy that holds an API key to call the backend (what you are currently doing).
There are many ways to implement a secured proxy for a backend, but this authentication logic should not sit inside the backend itself; put it in a separate service (like Envoy, nginx, ...).
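As an illustration of that relay pattern, here is a minimal sketch of a Next.js (Pages Router) API route that keeps the key server-side; the file name and the DATA_API_URL / DATA_API_KEY environment variables are assumptions.

```typescript
// pages/api/data/[...path].ts — hypothetical file name and env vars.
import type { NextApiRequest, NextApiResponse } from 'next';

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  // Rebuild the upstream path from the catch-all segment.
  const path = ((req.query.path as string[]) ?? []).join('/');

  // The API key lives only in the server environment; the browser never sees it.
  const upstream = await fetch(`${process.env.DATA_API_URL}/${path}`, {
    method: req.method,
    headers: {
      'x-api-key': process.env.DATA_API_KEY ?? '',
      'content-type': 'application/json',
    },
    body: req.method === 'GET' || req.method === 'HEAD'
      ? undefined
      : JSON.stringify(req.body),
  });

  res.status(upstream.status).json(await upstream.json());
}
```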

How to use JWT Auth0 token for Cloud Run Service to Service communication if the Metaserver Token is overriding the Auth0 Token

Prerequisites
I have two Cloud Run services, a frontend and a backend. The frontend is written in Vue.js/Nuxt.js and therefore uses a Node backend. The backend is written in Kotlin with Spring Boot.
Problem
To have authenticated internal communication between the frontend and the backend I need to use a token that is fetched from the Google metaserver. This is documented here: https://cloud.google.com/run/docs/authenticating/service-to-service#java
I did set it all up and it works.
For my second layer of security I integrated the Auth0 authentication provider both in my frontend and my backend. In my frontend a user can log in. The frontend is calling the backend API. Since only authorized users should be able to call the backend I integrated Spring Security to secure the backend API endpoints.
Now the backend verifies that the token in the caller's request is valid before allowing it through to the API logic.
However, this theory does not work in practice, and that is simply because I delegate the API calls through the Node backend proxy. The proxy logic is already applying a token to the request to the backend: the Google metaserver token. Let me illustrate that:
Client (Browser) -> API Request with Auth0 Token -> Frontend Backend Proxy -> Overriding Auth0 Token with Google Metaserver Token -> Calling Backend API
Since the backend is receiving the metaserver token instead of the Auth0 Token it can never successfully authorize the API call.
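To make the failure mode concrete, here is a rough sketch of what such a proxy does, following the service-to-service recipe from the docs linked above; the backend URL is a placeholder.

```typescript
// What the proxy effectively does. The backend URL is a placeholder; the
// metadata endpoint is the one from the Google docs linked above.
const BACKEND_URL = 'https://backend-xyz-uc.a.run.app';
const METADATA_URL =
  'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity';

async function callBackend(path: string, auth0Token: string): Promise<Response> {
  // Fetch an identity token for the backend from the metadata server.
  const idToken = await fetch(
    `${METADATA_URL}?audience=${encodeURIComponent(BACKEND_URL)}`,
    { headers: { 'Metadata-Flavor': 'Google' } },
  ).then((r) => r.text());

  // Authorization can carry only one bearer token, so the metaserver token
  // replaces the Auth0 token here — auth0Token never reaches the backend.
  return fetch(`${BACKEND_URL}${path}`, {
    headers: { Authorization: `Bearer ${idToken}` },
  });
}
```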
Question
Since I was not able to find any articles about this problem, I wonder if I am simply doing something fundamentally wrong.
What do I need to do to have a valid Cloud Run Service to Service communication (guaranteed by the metaserver token) but at the same time have a secured backend API with Auth0 authorization?
I see two workarounds to make this happen:
Authorize the API call in the Node backend proxy logic
Make the backend service publicly available, thus making the metaserver token unnecessary
I don't like either of the above - especially the latter. I would really like to have it working with my current setup, but I have no idea how. There is no such thing as multiple authorization tokens, right?
OK, I figured out a third way to get de-facto internal service-to-service communication.
To omit the metaserver token authentication but still restrict access from the internet, I did the following for my backend Cloud Run service: I allowed unauthenticated invocations (no IAM check) and set the ingress to allow internal traffic only.
This makes the service addressable from the internet, but the ingress setting prevents any outsider from actually accessing it. The service is available without IAM, but only for internal traffic.
So my frontend is calling the backend API via the Node backend proxy now. Even though the frontend node-backend and the backend service are both somewhat "in the cloud", they do not share the same "internal network". In fact, the frontend node-backend's requests would be routed via egress to the internet and would call the backend service just like any other internet user would.
To make the traffic count as "coming from internal" you have to do something similar to a VPN, but here it's called a VPC (Virtual Private Cloud). Luckily that is very simple: just create a VPC connector in GCP.
BUT be aware that you need to create a so-called Serverless VPC Access connector. Explained here: https://cloud.google.com/vpc/docs/serverless-vpc-access
After the Serverless VPC Access connector has been created, you can select it in your Cloud Run service's "Connections" settings. For the backend service it can simply be selected. For the frontend service, however, it is important to select the second routing option, i.e. to route all traffic (not only requests to private IPs) through the VPC connector.
At least that is important in my case, since I am calling the backend service by its assigned service URL instead of a private IP.
After all that is done my JWT token from the frontend is successfully delivered to the backend API without being overwritten by a MetaServer token.
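The resulting proxy logic can then stay as simple as this sketch (placeholder URL again), since nothing has to fetch or attach a metaserver token anymore.

```typescript
// With ingress restricted to internal traffic and the frontend's egress
// routed through the VPC connector, no metaserver token is needed, so the
// proxy can pass the Auth0 token through untouched.
const BACKEND_URL = 'https://backend-xyz-uc.a.run.app';

async function callBackend(path: string, auth0Token: string): Promise<Response> {
  return fetch(`${BACKEND_URL}${path}`, {
    headers: { Authorization: `Bearer ${auth0Token}` },
  });
}
```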

Protect my RESTful services by a Keycloak Oauth2 Provider Using NGINX

We have some RESTful services on a certain URI, and we wanted to publish them on the web so we can use them in our mobile app (written in Java).
Our services run on a server which cannot handle too many requests at the same time,
so I used nginx (and its proxy_pass functionality) on an intermediate server to control access to our REST server.
Now we want to protect our services with OAuth2 using the Password or Client Credentials grant (as our mobile users should not log in to our servers, we cannot display any login page to them).
I set up a Keycloak server which is working, and I could get a token for my client. I'm going to give the auth/token URI to our mobile developers so they can get an OAuth2 token first and use it in their requests.
The problem is that I don't know how to configure nginx to authorize incoming REST requests using the token provided in the request header.
Should I configure Keycloak to handle requests and forward authorized ones to nginx?
Thanks for your help
After some tries I found this solution:
1- You have to add the njs module to nginx; this is necessary, and you have to compile it first. (On Windows this is a lot of trouble: I tried MinGW and got stuck at a dependency called expect which is not written for MinGW, and it wasted a lot of my time. I eventually moved our IAM to Ubuntu, where compiling njs and nginx was done in a few minutes!)
2- Introspection is the key subject here, and Keycloak supports it: its URI is the token URI plus /introspect, called with Basic authorization in the header and the token in the body (see the sketch after these steps).
3- nginx also supports introspection once the njs module is added, which lets nginx run JS code included in the config file. A great example is NGINX Demos - OAuth2 Introspection OSS: just copy the config file and the oauth2.js file and it's done. I added an api directive to the location block in the nginx config file to let callers know it is protected.
4- Create a client for nginx in Keycloak to perform the introspection; it should be in confidential mode, and Service Accounts should be enabled for it.
5- nginx should forward (proxy_pass) the auth/token request to the IAM, so a location for this should be added to the config file.
6- [For Ubuntu] I got an error in nginx telling me it cannot resolve localhost! Installing BIND 9 fixed this for me (another time-wasting effort here).
7- So anyone who wants to use our service should request a token first and then send their request with the token attached to nginx; nginx introspects the token and, if the token is OK and {"active": true} is received, forwards the request to the resource and passes the reply back to the requester.
All done.
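For anyone who wants to verify the Keycloak side of step 2 before wiring up njs, here is a rough sketch of the introspection call as a standalone Node 18+ script; the host, realm, client ID, and secret are placeholders for your installation.

```typescript
// Standalone check of the introspection endpoint (Node 18+). Host, realm,
// client id, and secret are placeholders for your Keycloak installation.
const INTROSPECT_URL =
  'https://iam.example.com/realms/myrealm/protocol/openid-connect/token/introspect';

async function introspect(token: string): Promise<boolean> {
  // Basic authorization with the confidential client from step 4.
  const basic = Buffer.from('nginx-introspection:CLIENT_SECRET').toString('base64');

  const resp = await fetch(INTROSPECT_URL, {
    method: 'POST',
    headers: {
      Authorization: `Basic ${basic}`,
      'Content-Type': 'application/x-www-form-urlencoded',
    },
    body: new URLSearchParams({ token }),
  });

  // Keycloak answers {"active": true} for a valid token (see step 7).
  const result = (await resp.json()) as { active?: boolean };
  return result.active === true;
}
```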

REST API with Single Page Application over HTTPS on Firefox only

I am developing a web service using a REST API. This REST API is running on port 6443 for HTTPS. The client is going to be a single-page application running on port 443 for HTTPS on the same machine. The problem I am facing is:
When I hit the URL, say https://mymachine.com/new_ui, I get a certificate exception for an invalid certificate because I use a self-signed one, so mymachine.com:443 gets added to the server exceptions. But requests still don't reach the REST API, as it runs on https://mymachine.com:6443/restservice. If I manually add mymachine.com:6443 to the server exceptions in Firefox it works, but that will not be an option in production for customers.
Some options I thought of are:
1. Show another pop-up and ask the user to add an exception for the REST server on port 6443 too. But this doesn't look proper: why should an end user accept the certificate for the same domain twice? Also, the REST API server port can change.
2. Programmatically add an exception for the domain and both ports in one shot - of course with the consent of the user. Can we do that?
3. Use a reverse proxy (see the sketch below). But then it's going to have a memory footprint on our system. Also it will be time consuming.
Please suggest some options. How do I deal with this? Thank you
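To illustrate option 3, here is a minimal sketch of a single-origin relay in Node that serves everything from port 443 and forwards /restservice to port 6443, so the browser only has to accept one certificate; the TLS key/cert paths are placeholders.

```typescript
// Rough sketch of option 3: one origin on 443, relaying /restservice/* to
// the REST API on 6443 on the same machine. TLS paths are placeholders.
import https from 'node:https';
import { readFileSync } from 'node:fs';

const tls = {
  key: readFileSync('/etc/ssl/private/mymachine.key'),  // placeholder path
  cert: readFileSync('/etc/ssl/certs/mymachine.crt'),   // placeholder path
};

https.createServer(tls, (req, res) => {
  if (req.url?.startsWith('/restservice')) {
    // Relay the request to the REST API on port 6443.
    const upstream = https.request(
      {
        host: 'localhost', port: 6443, path: req.url, method: req.method,
        headers: req.headers,
        rejectUnauthorized: false, // upstream uses the self-signed cert
      },
      (up) => {
        res.writeHead(up.statusCode ?? 502, up.headers);
        up.pipe(res);
      },
    );
    req.pipe(upstream);
  } else {
    res.writeHead(404).end(); // serving the SPA itself is omitted here
  }
}).listen(443);
```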

Behaviour of SAML when HTTP Server used for high availability

I have implemented support for SAML SSO, having my application act as the Service Provider using the Spring Security SAML Extension. I was able to integrate my SP with different IDPs. So for example I have HostA, HostB, and HostC, which all have different instances of my application. I had an SP metadata file specified for each host and set the AssertionConsumerServiceURL to the URL of that host (e.g. https://HostA.com/myapp/saml/sso). I added each metadata file to the IDP, tested all of them, and it is working fine.
However, my project also supports high availability by having an IBM HTTP Server configured for load balancing. In this case the HTTP Server is configured with the hosts (A, B, C) to be used for load balancing, and the user accesses my application using the URL of the HTTP Server: https://httpserver.com/myapp/
If I defined one SP metadata file, specified the URL of the HTTP Server in the AssertionConsumerServiceURL (https://httpserver.com/saml/sso), and changed my implementation to accept assertions targeted at my HTTP Server, what would be the outcome of this scenario:
The user accesses the HTTP Server, which dispatches the user to HostA (behind the scenes).
My SP application on HostA sends a request to the IDP for authentication.
The IDP sends the response back to my HTTP Server at https://httpserver.com/saml/sso .
Will the HTTP Server redirect to HostA, so that the response arrives as https://HostA.com/saml/sso ?
Thanks.
When deploying the same application in a clustered mode behind a load balancer, you need to instruct the back-end applications about the public URL on the HTTP Server (https://httpserver.com/myapp/) which they are deployed behind. You can do this using the SAMLContextProviderLB (see more in the manual). But you seem to have already performed this step successfully.
Once your HTTP Server receives a request, it will forward it to one of your hosts at a URL such as https://HostA.com/saml/sso and will usually also provide the original URL as an HTTP header. The SAMLContextProviderLB will make the SP application think that the real URL was https://httpserver.com/saml/sso, which will make it pass all the SAML security checks related to the destination URL.
As the back-end applications store state in their HttpSessions, make sure to do one of the following:
enable sticky sessions on the HTTP Server (so that related requests are always directed to the same server)
make sure to replicate HTTP session across your cluster
disable checking of response ID by including bean EmptyStorageFactory in your Spring configuration (this option also makes Single Logout unavailable)