Keycloak user session persistence

When I'm logged in, I don't see any active session in the database in the USER_SESSION and CLIENT_SESSION tables.
But at the same time, the active session is displayed in the Sessions tab in the Keycloak Admin Panel.
Where is this session stored: in memory?
And if it is stored in memory, how can I make it persist to a database table?

https://www.keycloak.org/docs/latest/server_installation/#cache
In Keycloak we have the concept of authentication sessions. There is a separate Infinispan cache called authenticationSessions.
So that's correct: the DB is not used for sessions, but plain "in memory" isn't accurate either. An Infinispan cache is used.
Of course, nobody is stopping you from rewriting the whole Keycloak caching layer to store sessions in the DB, but that would be quite a big task. I would say there is a reason the devs chose Infinispan and not the DB, so I wouldn't change it at all.
It is not clear why you need to store sessions in the DB. If it is for persistence, Infinispan offers configuration for a persistent store - e.g. you can use a JDBC cache store and save the data into a relational DB.
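As a rough illustration, an Infinispan JDBC cache store is declared inside the cache definition in the server configuration. The exact element and attribute names vary between Infinispan/WildFly versions, and the datasource JNDI name and table layout below are made up, so treat this as a sketch of the idea rather than a drop-in config:

```xml
<!-- Sketch only: attach a JDBC store to the sessions cache so entries
     survive restarts. Element/attribute names differ across versions. -->
<distributed-cache name="sessions" owners="2">
    <persistence>
        <string-keyed-jdbc-store data-source="java:jboss/datasources/KeycloakDS"
                                 passivation="false" purge="false">
            <string-keyed-table prefix="sessions">
                <id-column name="id" type="VARCHAR(255)"/>
                <data-column name="datum" type="BYTEA"/>
                <timestamp-column name="ts" type="BIGINT"/>
            </string-keyed-table>
        </string-keyed-jdbc-store>
    </persistence>
</distributed-cache>
```

Note this persists the serialized cache entries, not a queryable relational model of sessions - the rows are opaque blobs keyed by session id.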

Related

PostgreSQL|Scala: Any efficient way to interact using different users for different queries with heavy ACL use

My whole interest in PostgreSQL is driven by its ACL system which is really powerful.
To access the data in a Scala application I have two options in mind EBeans/JDBC or Slick FRM.
Our application is an enterprise one with more than 1000 users who will be accessing it simultaneously, with different roles and access permissions. The connectors I am aware of ask for a database username/password when the connection is built, and I haven't found any facility to change the username/password on the fly; we will be getting the user reference from the session object of the user accessing our server.
I am not sure how much sense the title of the question makes, but I don't see recreating (or separately creating) a database connection for every user as an efficient solution. What I am looking for is a library or toolkit that lets us supply the interacting sub-user/ROLE as an option, so that PostgreSQL can do its ACL enforcement/checks on the data or manipulation requested.
You can "impersonate" a user in Postgres for the duration of a transaction, and reset it just before the transaction is done, using the SET ROLE or SET SESSION AUTHORIZATION commands issued after establishing a database connection.
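A minimal sketch of that pattern (the role and table names are made up). With SET LOCAL ROLE the previous role is restored automatically when the transaction commits or rolls back, which makes it safe to use on pooled connections:

```sql
BEGIN;
-- Adopt the permissions of the application user for this transaction only.
SET LOCAL ROLE app_user_alice;
SELECT * FROM orders;   -- evaluated against app_user_alice's ACLs
COMMIT;                 -- SET LOCAL reverts the role automatically
```

Two caveats: the database user the application connects as must have been GRANTed membership in each role it impersonates, and SET ROLE cannot take a bind parameter, so the role name must be strictly validated (e.g. against a whitelist) before being interpolated into the statement.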

Is it safe to enable MSDTC (Microsoft Distributed Transaction Coordinator) on our server

I'm always worried about using two instances of DbContext that would require distributed transaction coordinator, especially when I'm using a library like SimpleMembership which has its own connection to the database.
This problem is always an issue in my case. For example, I'm using the SimpleMembership provider for my application's user accounts and I want to save a user with additional information, like a company. Without MSDTC enabled I can't do this inside a single transaction, so it is possible that inconsistent data is inserted into the database.
So my question is: should I worry about this problem and try to find a better solution, or can I just enable MSDTC on my server and not worry about it? Are there any consequences of enabling MSDTC?
Thanks!

PersistenceContext propagation

I'm migrating an application from desktop to web. In the desktop application, users connect to an Oracle database using different database users, i.e. users are managed by Oracle, not within a database table. All of them use the same schema, PLMU_PROD, to store and manage data.
I have to implement authentication (JPA) for the web application and, as I read, I would have to create an EntityManagerFactory for each database user.
The other option I'm considering is to create a table of users/passwords and use a single EntityManagerFactory to serve all EntityManagers, since all users will access the same data in the PLMU_PROD schema.
I wonder whether the PersistenceContext is shared between different EntityManagerFactories, as my web server has little RAM and I do not want to waste it on duplicate entities.
Thanks for your time!
What you seem to be referring to is caching. JPA requires that EntityManagers keep entities cached so that they can track changes. So each EntityManager is required to have its own cache, keeping changes made in one separate from changes that might be made concurrently in others - transaction isolation. Within EclipseLink, there is the concept of a second-level cache that is shared at the EntityManagerFactory level. http://wiki.eclipse.org/EclipseLink/Examples/JPA/Caching is a good document on caching in EclipseLink. This second-level cache helps avoid database access and can be disabled as required. If your EntityManagers do not need to track changes - for example, if the application is read-only and the entities are not modified - you can set queries to return entities from the shared cache, so that only a single instance of the data exists, using the read-only query hint: http://www.eclipse.org/eclipselink/documentation/2.4/jpa/extensions/q_read_only.htm#readonly
Read-only instances can allow avoiding duplication and using resources unnecessarily, but you will need to manage them appropriately and get managed copies from the EntityManager before making changes.
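A minimal sketch of that read-only hint (the Employee entity and the surrounding EntityManager are assumed to exist; requires EclipseLink as the JPA provider):

```java
import javax.persistence.EntityManager;
import javax.persistence.TypedQuery;
import java.util.List;

// Assumes "em" is an open EntityManager and Employee is a mapped entity.
TypedQuery<Employee> q = em.createQuery(
        "SELECT e FROM Employee e", Employee.class);
// EclipseLink-specific hint: return instances straight from the shared
// (second-level) cache instead of building managed copies in this
// persistence context - saves memory, but the results must not be modified.
q.setHint("eclipselink.read-only", "true");
List<Employee> employees = q.getResultList();
```

If one of these instances later needs to be changed, fetch a managed copy through the EntityManager first, as noted above.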

Encrypt PostgreSQL database and obtain key from web application user

I would like to create a web application using PostgreSQL as the database. I want the database to be encrypted, so that even an attacker with root access to the database server can't decrypt the data (or at least would have to mess around with temporary in-memory data, which is hard). I don't care about the schema, only about the content of the tables.
I also don't want to store the decryption key somewhere on the application server (neither in a config file, nor hardcoded).
Instead, my idea was to encrypt the whole database (or just tables and rows?) using a key that is provided by the user through the web application and used to decrypt at runtime.
Is this scenario possible with PostgreSQL, and which options do I have to implement this?
Side note: it's a .NET based application (ASP.NET MVC3) and I'm using the Npgsql driver.
Use pgcrypto for encryption. But a superuser controls the log files and can tell the database to log everything, every query - and those logged queries will include your passwords.
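A minimal pgcrypto sketch (the table and key are illustrative): the key is supplied per request by the application and never stored server-side, so only ciphertext lives in the table:

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;

CREATE TABLE secrets (
    id      serial PRIMARY KEY,
    payload bytea          -- ciphertext only; plaintext never hits disk here
);

-- Encrypt on write; the key comes from the user via the web application.
INSERT INTO secrets (payload)
VALUES (pgp_sym_encrypt('sensitive text', 'user-supplied-key'));

-- Decrypt on read with the same key.
SELECT pgp_sym_decrypt(payload, 'user-supplied-key') AS plaintext
FROM secrets;
```

As warned above, this does not defend against a hostile superuser: with statement logging enabled, the key appears in the server log every time one of these statements runs.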
You might want to use SELinux and SEPostgreSQL.

Is there a way of connecting to shared OpenEdge RDBMS with read only access?

Our new security policies require restricting developers' data access to the production database. Setting the -RO parameter does not work, for several reasons (extracts from the 'Startup Command and Parameter Reference', http://documentation.progress.com/output/OpenEdge102b/pdfs/dpspr/dpspr.pdf):
1) "If you use the -RO parameter when other users are updating the database, you might see invalid data, such as stale data or index entries pointing to records that have been deleted."
2) "A read-only session is essentially a single-user session. Read-only users do not share database resources (database buffers, lock table, index cursors)."
3) "When a read-only session starts, it does not check for the existence of a lock file for the database. Furthermore, a read-only user opens the database file, but not the log or before-image files.
Therefore, read-only user activity does not appear in the log file."
We would like to be able to access data in the production database from OpenEdge Architect, but without being able to edit it. Is that possible?
In most security conscious companies developers are not allowed to access production. Period. Full stop.
One thing that you could do as a compromise... if the need is to occasionally query data you could give them access to a replicated database via OpenEdge Replication Plus. This is a read-only db connection without the drawbacks of -RO. It is real-time, up to date and access is separately controlled -- you could, for instance, put the replicated db on a different server that is on a different subnet.
The short answer is no, they can't access it directly and read-only.
If you have an appserver, you could write some code which would provide a level of dynamic RO data access via appserver or webservice calls.
The other question I'd have is - what are your developers doing accessing the production database? That should be a big no-no.