I have installed the kong-jwt2header plugin in Kong with the configuration config.token_required=true.
I have also configured the Identity service in Kong, which is used to generate the JWT token. Because config.token_required=true is set, the kong-jwt2header plugin throws the error {"error": "No valid JWT token found"} when a JWT token is requested from the Identity service.
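(Roughly, the plugin is enabled like this via Kong's Admin API; the Admin API address and the service name below are placeholders, and the plugin could equally be enabled globally.)

# sketch: enable kong-jwt2header with token_required=true on a service
curl -X POST http://localhost:8001/services/<service-name>/plugins \
  --data "name=kong-jwt2header" \
  --data "config.token_required=true"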
Now I have two options:
Set config.token_required=false, or
keep the Identity service outside the Kong gateway,
so that a JWT token is not required when requesting a token from the Identity service.
Which of these two is the better way? Or is there a better way to send the claims upstream?
I have set up keycloak-oidc on Kong, and I have a protected API behind Kong. I am able to call Keycloak through Kong because I added a filter /auth/*. Below is my oidc configuration for Keycloak.
I configured my REALM and CLIENT_ID on Keycloak as follows:
When I call the protected API with a Bearer token acquired from Keycloak, I am unable to reach the protected API, because Keycloak returns
{ "error": "invalid_request", "error_description": "Missing parameter: username" }
I have turned off the Standard Flow, yet I am unable to get authenticated by Keycloak and passed on to the protected API.
Please what am I doing wrong?
First of all, I had to upgrade kong-oidc from 1.0.1 to 1.1.0. Then I simply updated the introspection endpoint in the oidc plugin configuration as shown below; in the images I shared in the question above, the introspection endpoint field was not present and hence could not be set until after the upgrade.
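(A rough sketch of that change via the Admin API; the plugin id, Keycloak host, and realm are placeholders:)

curl -X PATCH http://localhost:8001/plugins/<oidc-plugin-id> \
  --data "config.introspection_endpoint=https://<keycloak-host>/auth/realms/<realm>/protocol/openid-connect/token/introspect"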
When a request with the bearer token hits a microservice, does the microservice talk to Keycloak to validate the token for each request?
Is the traffic in "Step 5" configurable via the Keycloak adapter?
No, that would make too many requests. In the initialization phase the microservice loads the public key and signing algorithm from Keycloak's well-known configuration page. On each request the microservice then checks the signature of the bearer token locally.
The access token lifespan should not be too long; that is how you force your frontend to periodically go back to Keycloak and refresh the bearer token.
If you run your microservice and send a request to an API after adding the token, in the logs you will see "Loaded URLs from http://localhost:8080/auth/realms/{realm-name}/.well-known/openid-configuration". Opening this link shows a set of URLs: endpoints for token generation, userinfo and so on, as well as endpoints for getting the certs and signing keys, via which the signature of the token is verified.
(This will only happen if the Keycloak properties are defined in application.properties/application.yml.)
Step 5 will happen when you use a Keycloak adapter (the choice of adapters is given in the Keycloak documentation).
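For a Spring Boot service using the Keycloak adapter, those properties in application.yml might look roughly like this (a sketch; the Spring Boot adapter is just one of the adapter choices, and the server URL, realm, and client id are placeholders):

keycloak:
  auth-server-url: http://localhost:8080/auth   # base URL of the Keycloak server (placeholder)
  realm: my-realm                               # realm the client belongs to (placeholder)
  resource: my-client                           # client id registered in Keycloak (placeholder)
  bearer-only: true                             # only validate incoming bearer tokens, never redirect to a login page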
I want to set up basic authentication in Kubernetes. Every document says that I should create a CSV file and enter the username and password in it, but I do not want to use a file; I want a database, or Kubernetes itself, to handle it.
What can I do for basic authentication?
You can base your authentication on tokens if you don't want to use a static password file.
First option:
Service Account Tokens
A service account is an automatically enabled authenticator that uses signed bearer tokens to verify requests.
The plugin uses two optional flags: --service-account-key-file, a file containing a PEM-encoded key for signing bearer tokens (if unspecified, the API server's TLS private key is used), and --service-account-lookup, which, if enabled, revokes tokens that have been deleted from the API.
Service accounts are usually created automatically by the API server and associated with pods running in the cluster through the ServiceAccount Admission Controller. Bearer tokens are mounted into pods at well-known locations, and allow in-cluster processes to talk to the API server. Accounts may be explicitly associated with pods using the serviceAccountName field of a PodSpec.
Service account bearer tokens are perfectly valid to use outside the cluster and can be used to create identities for long standing jobs that wish to talk to the Kubernetes API. To manually create a service account, simply use the kubectl create serviceaccount (NAME) command. This creates a service account in the current namespace and an associated secret.
The created secret holds the public CA of the API server and a signed JSON Web Token (JWT).
The signed JWT can be used as a bearer token to authenticate as the given service account. See above for how the token is included in a request. Normally these secrets are mounted into pods for in-cluster access to the API server, but can be used from outside the cluster as well.
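A minimal sketch of creating a service account and using its token from outside the cluster (this assumes a cluster version that still auto-creates the token secret; names and the API server address are placeholders):

# create the service account and find the name of its generated secret
kubectl create serviceaccount demo-user
kubectl get serviceaccount demo-user -o jsonpath='{.secrets[0].name}'

# extract the signed JWT from that secret and call the API server with it as a bearer token
TOKEN=$(kubectl get secret <secret-name-from-above> -o jsonpath='{.data.token}' | base64 --decode)
curl -k -H "Authorization: Bearer $TOKEN" https://<api-server>:6443/api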
There are some drawbacks: because service account tokens are stored in secrets, any user with read access to those secrets can authenticate as the service account. Be careful when granting permissions to service accounts and read capabilities for secrets.
Second:
Set up OpenID Connect (you can find the full documentation here: oidc).
OpenID Connect (OIDC) is a superset of OAuth2 supported by some service providers, notably Azure Active Directory, Salesforce, and Google. The protocol’s main addition on top of OAuth2 is a field returned with the access token called an ID Token. This token is a JSON Web Token (JWT) with well known fields, such as a user’s email, signed by the server.
To identify the user, the authenticator uses the id_token (not the access_token) from the OAuth2 token response as a bearer token.
Since all of the data needed to validate who you are is in the id_token, Kubernetes doesn’t need to “phone home” to the identity provider. In a model where every request is stateless this provides a very scalable solution for authentication.
Kubernetes has no “web interface” to trigger the authentication process. There is no browser or interface to collect credentials which is why you need to authenticate to your identity provider first.
There’s no easy way to authenticate to the Kubernetes dashboard without using the kubectl proxy command or a reverse proxy that injects the id_token.
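As a rough sketch, once you have an id_token from your identity provider you can either pass it directly or store the OIDC settings in your kubeconfig (the issuer URL, client id/secret, and tokens below are placeholders):

# pass the id_token directly on a single call
kubectl --token=<id_token> get nodes

# or configure the oidc auth provider in kubeconfig
kubectl config set-credentials oidc-user \
  --auth-provider=oidc \
  --auth-provider-arg=idp-issuer-url=https://<issuer-url> \
  --auth-provider-arg=client-id=<client-id> \
  --auth-provider-arg=client-secret=<client-secret> \
  --auth-provider-arg=refresh-token=<refresh-token> \
  --auth-provider-arg=id-token=<id_token>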
You can find more information here: kubernetes-authentication.
I am trying to enable JWT authentication for my backend Java microservice, which is deployed locally; all requests to the microservice are gated through WSO2 APIM 2.6. WSO2 IS 5.6 is used as the JWT token provider.
I have placed all the required configuration on both WSO2 IS and WSO2 APIM on my machine. Since both are on the same machine, I have also configured a port offset of 1.
I created a fresh user in the APIM Store and used it to create an application and subscribe to an API for that user. The configured token type is JWT. I used Postman as the client for fetching the access token, and the access token gets fetched as expected. Thereafter, when I use the same token to access the required resource through the API gateway, it gives me back "Unclassified Authentication Failure" with code "0" and description "Access failure for API: /notification/1.0, version: 1.0 status: (0) - Unclassified Authentication Failure":
<ams:fault xmlns:ams="http://wso2.org/apimanager/security">
<ams:code>0</ams:code>
<ams:message>Unclassified Authentication Failure</ams:message>
<ams:description>Access failure for API: /notification/1.0, version: 1.0 status: (0) - Unclassified Authentication Failure</ams:description>
</ams:fault>
I am expecting the resource to get created, as it is a POST request via WSO2 APIM to the backend service. Please share any available insights on this.
The token type JWT can only be used with API Manager Microgateways. Create an OAuth application and try using the JWT grant type for it instead. You can find more information about the JWT grant type at
https://docs.wso2.com/display/AM260/JWT+Grant#JWTGrant-JWTBearerGrant
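As a rough sketch, a JWT grant token request to the APIM token endpoint looks something like this (host, port, consumer key/secret, and the signed JWT issued by WSO2 IS are placeholders):

curl -k -X POST https://<apim-gateway-host>:<port>/token \
  -u '<consumer-key>:<consumer-secret>' \
  -d 'grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer' \
  --data-urlencode 'assertion=<signed-jwt-from-wso2-is>'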
We are using Apigee as our Authorization Server (AS), and we have a few Spring RESTful services deployed in the IBM Bluemix public cloud which act as our Resource Server (RS).
Each of the services has an equivalent proxy service configured in Apigee. For the proxy services, we have configured the VerifyOAuthTokens policy to verify the token passed by the user and return an error if an invalid token is passed.
The problem is that, since our RS is in the public cloud (with no plans or need to move to a dedicated or private cloud), the API endpoints are open and can be invoked by anyone who knows the URL. Although the expectation is that everyone calls the APIs via the Apigee proxies, we cannot enforce that, since we are in a public cloud and have no way to restrict inbound traffic to Apigee only. We would like to take the following approach to secure the API endpoints:
1. Accept the Authorization header for each call.
2. Take the token and call a validate-token service in Apigee.
For 2, we are not able to find an Apigee API which can validate an access token, similar to, say, Google's
https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=1/fFBGRNJru1FQd44AzqT3Zg
or Github's
GET /applications/:client_id/tokens/:access_token
Is there actually an external APIGEE service to validate a token?
If not, what would be the best way to make sure that only valid users with valid tokens can access the apis?
Thanks,
Tatha
Did you look at this post in the Apigee Community: Using third-party OAuth tokens?
We did something similar to this, but not using OAuth tokens. We used Apigee to do a callout to a third-party IdP (identity provider). The third-party IdP wasn't able to generate tokens, but it exposed a web service to authenticate the user. If the user was authenticated successfully (based on interpreting the result received back from the target endpoint web service), then you tell Apigee that it was successful by setting the external authorization status to true (step #2 in the link).
NOTE: this has to be done inside an Assign Message policy step PRIOR to the GenerateAccessToken operation. Apigee interprets this as a successful authorization and can then generate a valid OAuth token that the caller can send along to access the protected API.
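A rough sketch of those two pieces, assuming the flow from that article (policy names and the grant type are placeholders):

<!-- Sketch: after the IdP callout succeeds, flag the authorization as externally approved -->
<AssignMessage name="AM-SetExternalAuthStatus">
  <AssignVariable>
    <Name>oauth_external_authorization_status</Name>
    <Value>true</Value>
  </AssignVariable>
</AssignMessage>

<!-- Sketch: generate the token, telling OAuthV2 to trust the external authorization decision -->
<OAuthV2 name="OA-GenerateAccessToken">
  <Operation>GenerateAccessToken</Operation>
  <ExternalAuthorization>true</ExternalAuthorization>
  <SupportedGrantTypes>
    <GrantType>client_credentials</GrantType>
  </SupportedGrantTypes>
  <GenerateResponse enabled="true"/>
</OAuthV2>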