We are moving right along with building out our custom IdentityServer solution based on IdentityServer3. We will be deploying in a load balanced environment.
According to https://identityserver.github.io/Documentation/docsv2/configuration/serviceFactory.html there are a number of services and stores that need to be implemented.
I have implemented the mandatory user service, client and scope stores.
The document says there are other mandatory items to implement but that there are default InMemory versions.
We were planning on using the default in-memory versions for the rest, but I am concerned that not all of them will work in a load-balanced scenario.
What are the other mandatory services and stores we must implement for things to work properly when load balanced?
With multiple IdentityServer installations serving the same requests (e.g. behind a load balancer) you won't be able to use the various in-memory token and consent stores: authorization codes, refresh tokens and reference tokens issued by one server won't be recognized by the others, and user consent won't be persisted across instances. If you are using IIS, machine key synchronization is also necessary for tokens to work across all instances.
There's an Entity Framework package available for the token stores; you'll need the operational data part of it.
There's also a very useful guide to going live here.
For securing a frontend application, I created a new Keycloak client with a custom configuration:
mapper which includes "client roles"
scope configuration
client-specific roles (composite and non-composite roles)
This setup works fine in the local development environment. Now we need to transfer this configuration to the other environments (develop/preproduction/production stages).
As far as I understand, Keycloak offers the following exports:
Complete realm
Specific client
It looks as if both approaches have some major drawbacks: either I would need to overwrite the complete realm (which I definitely don't want to do in production), or I can import only the basic client configuration, which is missing all the roles.
And as soon as we, for example, add more roles later on, then we would need to re-configure all stages manually.
Is there some "good practice" for dealing with this? Does Keycloak offer some kind of "sync" between stages?
This is a hard question to answer definitively; it comes down to comparing API calls against UI configuration (export/import).
Disadvantages of API calls: I prefer API calls, but it takes time to figure out the right API functions; the call order matters; some properties missing on the parent have to be set in detail on the child objects; the URL paths are deeply nested (e.g. .../{id}/property/{id}/property); and it requires deeper knowledge of Keycloak.
Advantages of API calls: finer tuning; fast; easy to organize from top to bottom (e.g. configure the client, then the auth resources, auth scopes, policies and permissions in the other environment); you can transfer 100% of the configuration.
Disadvantages of UI configuration (export/import): not flexible; mismatched IDs cause import errors; you can't update or add partial data (e.g. an exported client is missing its resource scopes, which then have to be set by a separate API call); you can't move 100% of the configuration from the source to the target environment; and it is prone to human error.
Advantages of UI configuration: easy and quick, even though it is manual.
My preference is the API approach: use Postman at the local and develop stages (a single API call, or a collection for a sequence of calls, with simple tests checking the HTTP status), and curl calls from a Bash shell for the higher stages. If you check the state of the target first, you can handle scenario-based transfers (e.g. if a setting already exists, skip that configuration).
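For illustration, here is a minimal sketch of that kind of transfer using the Keycloak Admin REST API from TypeScript (Node 18+ for the built-in fetch). The host, realm, client and role names are placeholders, and older Keycloak versions prefix these paths with /auth:

// transfer-role.ts - sketch only; adjust hosts, names and paths to your Keycloak version
const BASE = "https://keycloak.example.com";   // placeholder target host
const REALM = "myrealm";                       // placeholder realm

// 1. Obtain an admin token from the master realm
const tokenRes = await fetch(BASE + "/realms/master/protocol/openid-connect/token", {
  method: "POST",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  body: new URLSearchParams({
    grant_type: "password",
    client_id: "admin-cli",
    username: "admin",
    password: process.env.KC_ADMIN_PASSWORD ?? "",
  }),
});
const { access_token } = await tokenRes.json();
const auth = { Authorization: "Bearer " + access_token };

// 2. Look up the client's internal id by its clientId (call order matters: the client must exist first)
const clients = await (await fetch(
  BASE + "/admin/realms/" + REALM + "/clients?clientId=frontend-app", { headers: auth })).json();
const clientUuid = clients[0].id;

// 3. Create a client role in the target environment
await fetch(BASE + "/admin/realms/" + REALM + "/clients/" + clientUuid + "/roles", {
  method: "POST",
  headers: { ...auth, "Content-Type": "application/json" },
  body: JSON.stringify({ name: "my-client-role" }),
});

The same pattern extends to mappers and scope settings; doing a GET on the target first lets you skip configuration that already exists.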
One more tip: if you open the developer tools (F12) in Chrome or Firefox, you can see the admin console's API calls in the Network tab. That saves time figuring out the API methods and the payload/response JSON data.
If I save API keys to flutter_secure_storage, they must be exposed in the first place. How could they be pre-encrypted or saved to secure storage without exposing them initially?
I want to add a slight layer of security where keys are stored securely, only to be exposed when making an API call. But if I have keys hardcoded then they are exposed even if only at initial app run. How do you get around this logic?
To avoid exposing the API key in your source, you can store keys in a '.env' file and use the flutter_dotenv package to access them when making API calls. However, this alone does not help at the moment the API call is made. If you really want to avoid exposing the keys, move the API calls to a backend so that those network calls (and the keys) are never visible to the client.
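If you go the backend route, a minimal sketch of such a proxy could look like this (Node/Express in TypeScript; the /weather route and the upstream URL are made-up examples). The key stays in the server's .env and only the response data reaches the app:

// proxy.ts - the app calls GET /weather on your server and never sees the upstream key
import "dotenv/config";      // loads API_KEY from the server-side .env file
import express from "express";

const app = express();

app.get("/weather", async (_req, res) => {
  // hypothetical third-party endpoint; the key is appended server-side only
  const upstream = await fetch(
    "https://api.example-weather.com/v1/current?key=" + process.env.API_KEY);
  res.json(await upstream.json());
});

app.listen(3000);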
If this is a web project, you could use something like base64 on both ends, then decode and save it like this:
SERVER (PHP):
$apiKeyEncoded = base64_encode(apiKeyGenerator()); // apiKeyGenerator() is your own key source
CLIENT (Dart):
import 'dart:convert'; // for base64Decode and utf8

final apiKeyEncoded = await getApiKey();
final apiKeyDecoded = utf8.decode(base64Decode(apiKeyEncoded)); // this is the usable one, save it
Now, if the project is focused on mobile use, I don't think you actually need to implement this, though the code would be the same.
I will add some input to this. I am using Parse / Back4App, which exposes app API keys in the same way that Firebase does. I have discovered a few very important security designs which may help with this.
Client side
Don't worry about app API keys being abused. Firebase and Back4App both have protections in place for this, including DoS & DDoS mitigation features.
Move ALL actual API calls to server and call from client via cloud code. If you want to go to the extreme, create a user-device hash code for custom client rate limiting.
Server side
LOCK DOWN ALL CLPs and ALL ACLs (basically lock ALL PERMISSIONS) and ONLY allow cloud calls, with heavy security checks, authorized access to anything server side, including outside API calls.
Make API calls from your server only. Better yet, move your API calls outside cloud calls and create "cloud jobs"; these run on a schedule with Back4App, and you can periodically call whatever API you need from the server (see the sketch after this list). Example: a cryptocurrency app might update prices once per second or once per minute; the server gets these updates and pushes them to clients. No risk of someone getting your crypto API keys and running up your limits.
Put in a custom rate-limiting design, and design around it so your rate limits would never trip under normal circumstances. If they do trip in excess, ban the user and drop their requests.
Also put API keys in a .env file on the server. Go a step further and use a hardware key-encryption service.
It would be a tell-tale sign that your server is compromised if your API keys get abused with this structure.
Want further DoS & DDoS protection? Mirror your server a few times and create a structure whereby client requests can be redirected during attacks, or non-attacking clients receive new app API keys.
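To make the cloud-code idea above concrete, here is a rough sketch of a Back4App cloud function that proxies a third-party API (the function name and upstream URL are made up; Parse is provided as a global by the cloud-code runtime, and the key comes from a server-side environment variable):

// main.js (Back4App Cloud Code) - sketch only
Parse.Cloud.define("getPrices", async (request) => {
  // reject unauthenticated clients before doing any work
  if (!request.user) {
    throw new Parse.Error(Parse.Error.INVALID_SESSION_TOKEN, "Login required");
  }
  // the exchange API key never reaches the client
  const response = await Parse.Cloud.httpRequest({
    url: "https://api.example-exchange.com/v1/prices",
    headers: { "X-Api-Key": process.env.CRYPTO_API_KEY },
  });
  return response.data;
});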
... I could go on and on about security & what I've learned but I'll leave it at that.
I am investigating slow login times and some profile synchronisation problems on a large enterprise AEM project. The system has around 1.5 million users, and the website is served by 10 publishers.
The way this project is built, SAML_login is enabled for all these end users, and there is a third-party IDP which I assume SAML_login talks to. I'm no expert on the SSO / SAML_login process, so as a first step I'm trying to understand whether this is the correct way to go.
Because of this setup and the number of users, the SAML_login call takes 15 seconds on average. This is becoming more unacceptable day by day as the user count rises. Even more importantly, the synchronization between the 10 publishers fails occasionally, so some users sometimes can't use the system as expected.
Because the users are stored in the JCR for SAML_login, you cannot even go and check the /home/users folder from the CRX browser; it times out, as it is impossible to show 1.5m rows at once. My educated guess is that this is why the SAML_login call is taking so long.
I've come across articles that explain how to set up SAML_login on AEM, which makes this usage sound legitimate. But in my opinion this is the worst possible setup, as the JCR is not designed as a quick-access data store for this kind of usage scenario.
My understanding so far is that this approach might work well, but only with a limited number of users; with this many users it is not an applicable solution. So my first question would be: am I right? :)
If I'm not right, there is certainly a bottleneck somewhere which I'm not aware of yet, what can be that bottleneck to improve upon?
The AEM SAML authentication handler has some performance limitations with the default configuration. When your browser does an HTTP POST request to AEM under /saml_login, it includes a base64-encoded "SAMLResponse" request parameter. AEM processes that response directly and does not contact any external systems.
Even though the SAML response is processed on AEM itself, the bottlenecks of the /saml_login call are the following:
The initial login, where AEM creates the user node for the first time. You can mitigate this by creating the nodes ahead of time, e.g. with a script that pre-creates the SAML user nodes under /home/users.
Each login where the session is first created: a token node is created under the user node at /home/users/.../{usernode}/.tokens. This can be avoided by enabling the encapsulated token feature.
Finally, the last bottleneck occurs when the handler saves the SAMLResponse XML under the user node (for later use in SAML-based logout). This can be avoided by not implementing SAML-based logout: the latest com.adobe.granite.auth.saml bundle supports turning off the saving of the SAML response, and service packs AEM 6.4.8 and AEM 6.5.4 include this feature. To enable it, set the OSGi configuration properties storeSAMLResponse=false and handleLogout=false, and the SAML response will not be stored.
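For reference, the relevant part of that configuration could look like this in a JSON OSGi config file (e.g. com.adobe.granite.auth.saml.SamlAuthenticationHandler~mysite.cfg.json; the factory PID and file name are assumptions to verify against your instance, and these two properties are merged into your existing SAML handler configuration):

{
  "storeSAMLResponse": false,
  "handleLogout": false
}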
I need to write a test for a JAX-RS web service which asserts that a certain value is cached in the session from disk on the first request of the session.
The testing process does not have access to the tested process. The use case is invoking the services through the REST API.
I can think of several options to proceed with:
Create a REST endpoint just for testing, and query the needed session value there.
Write and then read a log message.
I am aware that I am trying to test an implementation detail via an external API which provides no contract for this detail, but currently I'm a bit constrained as to which processes the testing infrastructure may run.
Are there any additional seams to exploit for testing, and what general good practice exists for this scenario?
I just came up with the idea of changing the cached resource on disk mid-session and using the change (or lack of change) in behavior to verify the caching.
I need to add multi-user capability to my single-page mobile app developed with Ionic 1, PouchDB and CouchDB. After reading many docs I am getting confused about what would be the best choice.
About my app:
it should be able to work offline, and then sync with the server when online (this is why I am using PouchDB and CouchDB, which are working great so far)
it should let the user create an account with a username and password, which would then be stored within the app so that he does not have to log in again whenever he launches the app. This account ensures his data is synced to the server in a secure place, so that other users cannot access it.
currently there is no need to have shared information between users
Based on what I have read I am considering the following:
on the server, have one database per user, storing his own data
on the server, have a master database storing all the data of all users, plus the design docs. This makes it easy to change the design docs in a single place and have them replicated to each user database (and then into the PouchDB database in the app). Synchronization between the master and the user DBs is done through a filter, so that only the docs belonging to a given user (via some userId field) are replicated to that user's database (see the sketch after this list)
use another module/plugin (SuperLogin? nolanlawson/pouchdb-authentication?) to manage users from the app (user creation, login, logout, password reset, email notification for lost passwords, ...)
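To illustrate the filter idea from the list above, here is a rough sketch using PouchDB replication (the design doc, filter and field names are placeholders; a server-side CouchDB replication document could reference the same filter):

// replicate-user-docs.ts - pull only one user's docs through a filter
import PouchDB from "pouchdb";

// The filter function lives in a design doc on the master CouchDB database, e.g.:
// { "_id": "_design/app", "filters": {
//     "by_user": "function (doc, req) { return doc.userId === req.query.userId; }" } }

const local = new PouchDB("mydata");
local.replicate.from("https://couch.example.com/master", {
  live: true,
  retry: true,
  filter: "app/by_user",
  query_params: { userId: "user-42" },   // placeholder user id
});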
My questions:
do you think this architecture is appropriate, or do you have something better to recommend?
which software would you recommend for user management? SuperLogin looks great but needs to run on a separate HTTP server, making the architecture more complex. Does it automatically create a new database for each new user (I don't think so)? nolanlawson/pouchdb-authentication is client-only, but does it fit well with Ionic 1? Isn't there a LOT to develop around it that comes out of the box with SuperLogin? Do you have any other module in mind?
Many thanks in advance for your help!
This is an appropriate approach. The local PouchDBs will provide the data on the client side even if a client goes offline, and the combination with a central CouchDB server is a great way to keep data synchronized between server and clients.
You want to store the user's credentials, so you will have to save this data somehow on the client side, which could be done in a separate PouchDB.
If you keep all your user data in a local PouchDB database and have one CouchDB database per user on the server, you can even omit the filter you mentioned, because the synchronization will only happen between these two user databases.
I recommend SuperLogin. Yes, you have to install NodeJS and some extra libraries (namely morgan, express, http, body-parser and cors), and you will have to open your server on at least one new port to provide this service. But SuperLogin is really powerful for managing user accounts and user databases on a CouchDB server.
For example, when a user registers, you just make a call to SuperLogin via http://server_address:port/auth/register, passing the user name, password, etc., and SuperLogin not only adds this new user to the user database, it also automatically creates a new database just for this user. Each user can have multiple databases (private or shared) and SuperLogin manages the access rights to all of them. Moreover, SuperLogin can also send confirmation emails or resend forgotten passwords (an access token, to be precise).
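As a rough sketch of that flow from the app side (plain fetch plus PouchDB in TypeScript; /auth/register and /auth/login are SuperLogin's default routes, but verify the request and response field names against the SuperLogin version you deploy, and "mydata" is a placeholder database name from the SuperLogin config):

// signup-and-sync.ts - register with SuperLogin, then sync the per-user database
import PouchDB from "pouchdb";

const SL = "http://server_address:port";   // placeholder SuperLogin base URL

await fetch(SL + "/auth/register", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    username: "alice", email: "alice@example.com",
    password: "secret", confirmPassword: "secret",
  }),
});

// logging in returns, among other things, the URLs of the user's databases
const session = await (await fetch(SL + "/auth/login", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ username: "alice", password: "secret" }),
})).json();

const local = new PouchDB("mydata");
local.sync(session.userDBs.mydata, { live: true, retry: true });   // live two-way sync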
Sure, you will have to configure a lot (but hey, at least you have all these options), and maybe you even have to write some additional API endpoints for functionality not covered by SuperLogin. But in general, SuperLogin saves a lot of pain in developing custom user management.
But if you are unsure about the server configuration, maybe a service such as Couchbase, Firebase etc. is a better solution. These services also have some user management capabilities, and you have to bother less with server security.