I have a JHipster application running inside Docker on a server, and on the other side I have Keycloak running in another Docker container as the authentication server. Each time a user wants to connect, JHipster performs the authentication with Keycloak, which generates a user access token. Basically it is the solution proposed by JHipster on their web page.
Problem:
The problem is that every time I change the JHipster application, I compile and build a new Docker image that replaces the old one, and everyone who had an active access token has to obtain a new one.
Does someone know how I can change this behavior, or create a token that stays active even after a new deployment?
Related
I have a webapp that successfully uses Keycloak for user management and as its auth provider.
The same application requires a CLI tool for some operations (similar to the gcloud CLI + web console).
I've implemented the CLI part using the OIDC Authorization Code Flow that opens the browser for the user to authenticate. It works like a charm.
However, if the user logs off from the browser, Keycloak will invalidate the session and the CLI will have to re-authenticate to get a new access_token and refresh_token.
My question here is: how can I force the CLI app login to create a new session, separate from the browser session?
Or, if not possible, what's the correct way of achieving this?
Eventually I found out that I just have to add the scope offline_access to the list of scopes I am requesting. Keycloak will then create a new offline session. (Bad name for the feature: "offline" just means that the user doesn't have to be present; all the refreshes happen the same way.)
https://github.com/keycloak/keycloak-documentation/blob/main/server_admin/topics/sessions/offline.adoc
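For illustration, here is a minimal sketch (TypeScript/Node) of what the change amounts to when the CLI builds the authorization URL itself; the realm URL, client ID and redirect URI below are placeholders for whatever your setup uses, and the only real difference from before is offline_access in the scope parameter:

```typescript
// Build the Authorization Code Flow URL the CLI opens in the user's browser.
// All endpoint/client values are placeholders for your own Keycloak setup.
import { randomUUID } from "node:crypto";

const realmBase = "https://auth.example.com/realms/myrealm"; // hypothetical realm URL
const authUrl = new URL(`${realmBase}/protocol/openid-connect/auth`);

authUrl.search = new URLSearchParams({
  client_id: "my-cli",                            // hypothetical CLI client
  redirect_uri: "http://localhost:8400/callback", // loopback URI the CLI listens on
  response_type: "code",
  // offline_access is the key addition: Keycloak creates an offline session, so the
  // refresh token keeps working even after the browser SSO session is logged out.
  scope: "openid offline_access",
  state: randomUUID(),                            // verify this on the callback
}).toString();

console.log("Open this URL to authenticate:", authUrl.toString());
```

The code exchange and token refresh then work exactly as before; the refresh token you get back is simply an offline token.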
My app uses a SPA client and a Phoenix/Elixir backend, with JWT authentication (via the Guardian library). The app is deployed using Docker on GCP.
I'm having the below issue:
1. I'm an authenticated user who has been issued a JWT. Everything works fine.
2. The production application's Docker image is rebuilt, redeployed, and the server is restarted.
3. My JWT issued before the rebuild is no longer valid.
I'm having trouble finding what could be causing this. As far as I can tell, the secret key used in the Guardian config in config.exs is always the same across builds.
Any help is appreciated!
Either the contents of the payload are being used to validate the message, and some field has changed in a way that makes the server consider the JWT invalid, or the secret actually has changed and your assertion is not correct.
The way I would troubleshoot this is by using an existing tool to verify the JWT. Either your secret key can be used to validate the signature or it can't; there is no need to guess.
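As a concrete sketch of that check (TypeScript, using the jsonwebtoken package, and assuming Guardian is signing with an HMAC shared secret, which is its default), where the token and secret values are placeholders you fill in yourself:

```typescript
// Standalone signature check for a JWT issued before the redeploy.
// Assumes HMAC signing (Guardian defaults to HS512); values are placeholders.
import jwt from "jsonwebtoken";

const token = process.env.TOKEN ?? "<paste the pre-deploy JWT here>";
const secret = process.env.GUARDIAN_SECRET ?? "<the secret_key from config.exs>";

try {
  // If this succeeds, the secret really is unchanged across builds and the
  // problem is in the claims (exp, iss, aud, ...) rather than the signature.
  const claims = jwt.verify(token, secret, { ignoreExpiration: true });
  console.log("Signature valid, claims:", claims);
} catch (err) {
  // An "invalid signature" error here means the secret did change between deployments.
  console.error("Verification failed:", (err as Error).message);
}
```

If the signature check fails, look at whether the secret is actually generated at build or boot time (for example injected through an environment variable that differs per container) rather than being the fixed value in config.exs.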
We are using Keycloak 3.4.0 / Keycloak.js in our single page app. Keycloak stores its data within a MariaDB.
When I restart the Keycloak server (NOT MariaDB) and refresh my single-page app, I am redirected to the login page. I thought that Keycloak stores all tokens within its database; shouldn't these tokens still be valid after a restart? Or is it expected that all sessions are logged out?
Do I have to use offline tokens to support this scenario?
The offline token is valid even after a user logout or server restart.
https://www.keycloak.org/docs/3.4/server_admin/index.html#_offline-access
This is written by one of the members of the Keycloak development team:
The JPA user session provider was dropped (performance was horrible, so we deemed it unusable). The user session persister is only used for offline sessions; they survive a server restart.
So yes, it seems like they removed it because of performance-related issues. Here you've got the whole thread.
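As a rough sketch of what requesting an offline token looks like from a single-page app with the keycloak-js adapter (assuming a recent adapter version whose login() options include scope; the URL, realm and client ID below are placeholders):

```typescript
// Request an offline token from a SPA via the keycloak-js adapter.
// Assumes an adapter version where login() accepts a "scope" option; config values are placeholders.
import Keycloak from "keycloak-js";

const keycloak = new Keycloak({
  url: "https://auth.example.com/auth", // hypothetical Keycloak base URL
  realm: "myrealm",
  clientId: "my-spa",
});

async function ensureLogin(): Promise<void> {
  const authenticated = await keycloak.init({ onLoad: "check-sso" });
  if (!authenticated) {
    // Asking for offline_access makes Keycloak issue an offline refresh token,
    // which is persisted in the database and survives a server restart
    // (regular user sessions only live in memory, per the quote above).
    await keycloak.login({ scope: "offline_access" });
  }
}

ensureLogin().catch(console.error);
```

Keycloak also requires the user to have the offline_access realm role for offline tokens to be issued; in standard setups it is granted by default.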
I used this tutorial to deploy a business network on a free Bluemix cluster: https://ibm-blockchain.github.io/
I also deploy the REST Server and communicate via Web apps.
Everything went fine until yesterday, when the REST server was no longer accessible.
I deleted everything on the cluster using the script delete_all available in the ibm-container-service repository.
I followed the install procedure again using the create_all script. I could access the Composer Playground (port 31080) again, but I was not really able to deploy an online business network using the hlfv1 connection profile: the deploy UI now asks for credentials at the bottom.
I don't know what to fill in. I tried ID + Password; that way I was able to deploy, but I got an access error when clicking on "connect now". I was then able to start the REST server, but when I try to access it in the browser (port 31090), I get a response saying I'm not authorized.
Any ideas?
And do you know which changes have been made in the last month that could cause these problems?
Thx
Phil
The tutorial you point to only covers Playground when used with a Web Browser connection, not a real fabric. When you deploy to a real fabric, you have to provide an initial identity that you want bound to an initial participant in the business network. The initial participant will be of type org.hyperledger.composer.system.NetworkAdmin and will be given the name of the initial identity you provide.
To get you started, select the ID and Secret radio button. Then enter admin for the Enrollment ID and adminpw for the Enrollment Secret.
This is the name and secret of the bootstrap identity that exists in the fabric-ca server that has been deployed as part of the scripts.
By providing this information, that identity will be enrolled and its public certificate will be bound to a NetworkAdmin participant named admin. This admin identity will then have access to the business network, as only identities bound to a participant in the business network have any sort of access.
I've created a new cluster with AAD for client auth using ARM by following the document linked to below. The cluster deployed and my app works fine but my browser is still asking me to select an X.509 certificate when I attempt to use the SF Explorer at: https://mycluster.northcentralus.cloudapp.azure.com:19080/Explorer
I thought when I hooked up Azure AD that the client cert would no longer be needed. Note that I do see that the SF Explorer displays my name in the upper right (with a logout option), indicating to me it's using AAD.
So, what's up with this? Any ideas?
https://azure.microsoft.com/en-us/documentation/articles/service-fabric-cluster-creation-via-arm/
That happens if there's an issue with AAD authentication: the cluster will fall back to certificate authentication.
If SF Explorer isn't re-directing to an AAD login page at all, then double-check that the web application reply URL in the AAD cluster application matches the SF Explorer URL.
If the re-direction is happening and AAD login was successful, then double-check that the AAD cluster application has the expected user roles and that your user has been assigned a role.