I deploy my gateway API on Kubernetes and sync tokens with Spring Session Hazelcast. It works fine until I scale to 2 replicas: the Hazelcast session cluster works, but when I log in (with Keycloak) and then call an API, some calls return a 500 internal error and the gateway logs the error "Expired token".
I don't understand why: it's a new session in the session cluster, so why is it expired?
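For context, a minimal sketch of the kind of configuration being described, assuming Spring Boot 2.x-style properties with Spring Session Hazelcast on the classpath (values are illustrative):
spring.session.store-type=hazelcast
spring.session.timeout=30m
# Hazelcast itself must form one cluster across both replicas (e.g. via the
# kubernetes join mechanism in hazelcast.yaml), otherwise each pod keeps its
# own session map and the replicas disagree about session state.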
I deployed a containerized app to my IBM Cloud Kubernetes Service cluster in a VPC. The app uses App ID for authentication. The deployment pipeline ran successfully and the app seems ready, but accessing its URL returns an internal server error (500 status code).
From the Kubernetes dashboard I found that the ALB OAuth Proxy add-on is failing: it is deployed, but does not start.
The deployment seems to fail its health checks (ping not successful). In the pod logs I found the following as the last (and only) entry:
[provider.go:55] Performing OIDC Discovery...
Otherwise, there is not much. Any advice?
Guessing from the missing logs and the failing pings, it seemed related to the network setup. Checking the VPC itself, I found that there was no Public Gateway attached to the subnet. Attaching one allowed outbound traffic, so the OAuth proxy could contact the App ID instance. The app is working as expected now.
Make sure that the VPC subnets allow outbound traffic and have a Public Gateway attached.
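For anyone checking this from the CLI, a sketch assuming the IBM Cloud vpc-infrastructure plugin (names are placeholders; verify the exact flags with --help, as they vary between plugin versions):
ibmcloud is public-gateways   # list existing public gateways
ibmcloud is subnets           # see which subnets have a gateway attached
ibmcloud is public-gateway-create my-pgw VPC_ID ZONE_NAME
# then attach the gateway to the subnet with `ibmcloud is subnet-update`
# (see its --help for the exact public-gateway flag in your plugin version)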
I have a Keycloak server running on localhost on port 8081.
I'm trying to connect my Quarkus application to it to secure REST endpoints.
However, I'm not able to log in to my Keycloak server.
I annotated a /test endpoint with @RolesAllowed("user"). Since then I can't access the endpoint; instead I get an empty page with a 401 Unauthorized error in the web console.
What I want is to be redirected to the default Keycloak login page so I can authenticate myself. Any ideas why that is not happening?
Here is my application.properties Keycloak configuration:
quarkus.oidc.auth-server-url=http://localhost:8081/realms/TestRealm
quarkus.oidc.client-id=testclient
quarkus.oidc.credentials.secret=MYSECRET
quarkus.oidc.tls.verification=none
quarkus.keycloak.policy-enforcer.enable=false
logging.level.org.keycloak=DEBUG
resteasy.role.based.security=true
quarkus.http.cors=true
quarkus.http.port=8080
When I set the policy enforcer to true, I can't access any endpoint at all.
TestRealm has a Resource configured with a /test endpoint.
In the Quarkus documentation for Keycloak it says that you don't need to set up your own Keycloak server in dev mode, since Quarkus comes with one. Might that be the problem? Is my Quarkus application not connecting to my Keycloak server? And if so, how can I force Quarkus in dev mode to use my Keycloak server?
EDIT: I figured out that I can access my endpoint if I send the request with a Bearer token, so I guess Quarkus is reaching my Keycloak instance.
Still, why am I not forwarded to the default Keycloak login page when I try to access my REST endpoint via the browser? Am I missing any configuration?
For anyone with the same issue, I fixed it by adding:
quarkus.oidc.auth-mechanism=keycloak
quarkus.oidc.application-type=web-app
quarkus.http.auth.permission.authenticated.paths=/*
quarkus.http.auth.permission.authenticated.policy=authenticated
to the config. (The key line is quarkus.oidc.application-type=web-app: the default type, service, only validates bearer tokens and returns 401 Unauthorized, while web-app enables the authorization code flow that redirects the browser to the Keycloak login page.)
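One note on the dev-mode question above: as far as I can tell, Quarkus only starts its own Dev Services Keycloak when no quarkus.oidc.auth-server-url is configured, so with an explicit URL dev mode should already use the local server. To pin it per profile (values taken from the question), a sketch:
%dev.quarkus.oidc.auth-server-url=http://localhost:8081/realms/TestRealm
%dev.quarkus.oidc.client-id=testclient
%dev.quarkus.oidc.credentials.secret=MYSECRET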
Prerequisites
I have two Cloud Run services, a frontend and a backend. The frontend is written in Vue.js/Nuxt.js and therefore uses a Node backend. The backend is written in Kotlin with Spring Boot.
Problem
To have authenticated internal communication between the frontend and the backend, I need to use a token that is fetched from the Google metaserver. This is documented here: https://cloud.google.com/run/docs/authenticating/service-to-service#java
I did set it all up and it works.
For my second layer of security I integrated the Auth0 authentication provider both in my frontend and my backend. In my frontend a user can log in. The frontend is calling the backend API. Since only authorized users should be able to call the backend I integrated Spring Security to secure the backend API endpoints.
Now the backend verifies that the token on the caller's request is valid before letting it pass on to the API logic.
However, this theory does not work in practice, and that is simply because I delegate the API calls through the Node backend proxy. The proxy logic, however, already applies a token to the request to the backend: the Google metaserver token. Let me illustrate that:
Client (Browser) -> API Request with Auth0 Token -> Frontend Backend Proxy -> Overriding Auth0 Token with Google Metaserver Token -> Calling Backend API
Since the backend receives the metaserver token instead of the Auth0 token, it can never successfully authorize the API call.
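For illustration, this is the call the proxy makes to obtain that token, as described in the Google docs linked above (the audience URL is a placeholder for the backend's run.app URL):
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=https://backend-xxxxx-uc.a.run.app"
# The returned identity token ends up in the Authorization header,
# replacing the Auth0 token that came from the browser.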
Question
Since I was not able to find any articles about this problem, I wonder if I am simply doing it fundamentally wrong.
What do I need to do to have valid Cloud Run service-to-service communication (guaranteed by the metaserver token) and, at the same time, a backend API secured with Auth0 authorization?
I see two workarounds to make this happen:
Authorize the API call in the Node backend proxy logic
Make the backend service publicly available, so the metaserver token becomes unnecessary
I don't like either of the above, especially the latter. I would really like to get this working with my current setup, but I have no idea how. There is no such thing as multiple authorization tokens, right?
OK, I figured out a third way to get de-facto internal service-to-service communication.
To omit the metaserver token authentication but still restrict access from the internet, I did the following for my backend Cloud Run service: I allowed unauthenticated invocations and set the ingress to internal traffic only.
This makes the service reachable without IAM, but the ingress setting prevents any outsider from accessing it; it accepts internal traffic only.
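For reference, a sketch of the equivalent gcloud commands (service name and region are placeholders):
gcloud run services update backend-service \
  --region=europe-west1 \
  --ingress=internal            # reject any traffic that is not internal
gcloud run services add-iam-policy-binding backend-service \
  --region=europe-west1 \
  --member="allUsers" \
  --role="roles/run.invoker"    # "allow unauthenticated": no IAM token required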
So my frontend now calls the backend API via the Node backend proxy. Even though the frontend's Node backend and the backend service are both somewhat "in the cloud", they do not share the same internal network. In fact, the frontend's requests would be routed via egress to the internet and would call the backend service just like any other internet user.
To make the traffic look "like it is coming from internal", you have to do something similar to a VPN, which here is called a VPC (Virtual Private Cloud). Luckily, that is very simple: just create a VPC connector in GCP.
BUT be aware that you need a so-called Serverless VPC Access connector, explained here: https://cloud.google.com/vpc/docs/serverless-vpc-access
After the Serverless VPC Access connector has been created, you can select it in your Cloud Run service's connection settings. For the backend service it can simply be selected. For the frontend service, however, it is important to select the second option, routing all traffic through the VPC connector.
At least that is important in my case, since I am calling the backend service by its assigned service URL instead of a private IP.
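In gcloud terms, roughly the following (connector name, region, and IP range are placeholders):
gcloud compute networks vpc-access connectors create my-connector \
  --region=europe-west1 \
  --network=default \
  --range=10.8.0.0/28
gcloud run services update backend-service \
  --region=europe-west1 \
  --vpc-connector=my-connector
gcloud run services update frontend-service \
  --region=europe-west1 \
  --vpc-connector=my-connector \
  --vpc-egress=all-traffic      # the "second option": route all egress through the VPC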
After all that is done, my JWT from the frontend is successfully delivered to the backend API without being overwritten by a metaserver token.
I am using JWT authentication between two services written in Django. The authentication works on my local machine, but when I run the same services in a Kubernetes cluster, I get an authentication error.
Also, when I change the decorator above the API to @permission_classes([AllowAny, ]) to skip any authentication check, but still pass the token in the header, I get a 401 Unauthorized error in the Kubernetes cluster.
Does anyone have any idea how to do JWT authentication between two Django services running in a Kubernetes cluster?
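One way to narrow this down is to check whether the Authorization header survives inside the cluster at all; a sketch (pod, service, and path are placeholders, and it assumes curl is available in the image):
kubectl exec -it frontend-pod -- \
  curl -v -H "Authorization: Bearer <token>" \
  http://backend-service:8000/api/endpoint/
# If this direct pod-to-service call succeeds while the normal path fails,
# a proxy or ingress in between is likely stripping or rewriting the header.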
Following: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#option-2-use-the-token-option
I want to be able to connect to the project/cluster context of our GKE clusters.
Normally, one would use gcloud and log in with a browser or with a JSON key file.
Is it possible to authenticate with just a service account token that you can feed into kubectl (without using gcloud)?
I cannot get the above documentation working; it doesn't seem to connect me to the cluster, as I get:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I am never able to connect outside of a local context.
I'm wondering if this is even possible: connecting to GKE clusters using nothing but a service account token?
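For what it's worth, the localhost:8080 error typically means kubectl has no cluster configured in its kubeconfig at all. A sketch of the token-based setup from the linked docs, assuming a Kubernetes service account token with suitable RBAC plus the cluster endpoint and CA certificate at hand (all values are placeholders):
kubectl config set-cluster my-gke \
  --server=https://CLUSTER_ENDPOINT_IP \
  --certificate-authority=ca.crt        # cluster CA from the GKE cluster details
kubectl config set-credentials sa-user --token="$SA_TOKEN"
kubectl config set-context my-gke --cluster=my-gke --user=sa-user
kubectl config use-context my-gke
kubectl get pods   # should no longer fall back to localhost:8080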