PostgreSQL | Scala: Any efficient way to interact using different users for different queries with heavy ACL use

My whole interest in PostgreSQL is driven by its ACL system, which is really powerful.
To access the data from a Scala application I have two options in mind: Ebean/JDBC or the Slick FRM.
Our application is an enterprise one with more than 1000 users, who will be accessing it simultaneously with different roles and access permissions. The connectors I am aware of ask for a database username/password when the connection is built, and I haven't found any that provide a facility to change the username/password on the fly; we will be getting the user reference from the session object of the user accessing our server.
I am not sure how much the title of the question makes sense, but I don't see recreating (or separately creating) a database connection for every user as an efficient solution. What I am looking for is a library or toolkit that lets us supply the interacting sub-user/ROLE as an option, so that PostgreSQL can do its ACL enforcement/checks on the data or manipulation requested.

You can "impersonate" a user in Postgres during a transaction and reset just before the transaction is done using the SET ROLE or SET SESSION AUTHORIZATION commands issued after establishing a database connection.

Related

Optimal solution for managing tens of thousands users of a desktop app accessing a PostgreSQL database

I would like to develop an app that accesses a PostgreSQL database. The app performs calculations and requests the required data from the database server.
The user can download the app from a website once he has registered. After starting the app, the user has to log in to be able to use it.
Now the question:
What would be the most sensible solution in this example?
To be honest, I don't want to create a separate role for each user.
My idea is that the app only accesses the database via a general role, for example named "usership". With this role, a user only has well-defined read access. It is possible that users should also be able to save their own settings or measured values under their user name in certain tables. Access to those would then only be possible with the correct user name and password, which are supplied with each operation on the relevant tables (this effort would not be necessary for read-only access to other tables with general data).
The question is whether there are any limits to how many apps can communicate with the database at the same time via the same database credentials / username "usership".
I don't want to have to create a separate DB role for each customer. Somehow that doesn't seem right to me, if only because adding new employees or deleting them would mean direct interventions in the database (CREATE ROLE / DROP ROLE). Basically, the app should do nothing more than a website where several users are logged in at the same time; the only difference is that the app does not run in the browser, and everything works either on the client side at the application level or on the database server.
I'm not aware of any limits on sharing usernames + passwords in Postgres. You can have hundreds or thousands of concurrent connections using the same username + password.
There can be issues with many hundreds or thousands of concurrent connections, though, depending on your database hardware, especially RAM.
While Postgres supports thousands of concurrent connections in theory, in practice I've run into memory issues as the number of open connections increases. If this is a problem and a large percentage of your connections are idle at any one moment, you can add a layer of connection pooling with something like pgbouncer, but keep in mind that adds another process to monitor.
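A minimal pgbouncer.ini sketch of such a layer; every name, path and size below is a placeholder:

    [databases]
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type = md5
    ; userlist.txt holds the shared credentials, e.g. the "usership" entry
    auth_file = /etc/pgbouncer/userlist.txt
    ; transaction pooling: idle clients do not hold server connections
    pool_mode = transaction
    ; client sessions allowed into the pooler vs. real Postgres connections
    max_client_conn = 2000
    default_pool_size = 20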
In general, however, I wouldn't recommend this approach. You'd be providing direct, essentially anonymous access to your shared database. I expect it would be difficult to secure your database credentials in the client, and with direct access it would be fairly easy to construct SQL queries that could take down your database server. This would be difficult to monitor or prevent, since all users would look the same and you'd have no way to revoke access in case of abuse (without changing the password for everyone who has access).
From a security standpoint I'd definitely recommend being able to identify your users, monitor their usage separately and revoke access individually. I don't know of any performance issues with having many thousands of separate Postgres users/credentials.
-- Scalability --
Using a postgres cluster with read replicas and load balancing (e.g. https://aws.amazon.com/premiumsupport/knowledge-center/requests-rds-read-replicas/) you should be able to scale this horizontally fairly easily if the need arises.

Recommendations for multi-user Ionic/CouchDB app

I need to add multi-user capability to my single-page mobile app developed with Ionic 1, PouchDB and CouchDB. After reading many docs I am getting confused about what would be the best choice.
About my app:
it should be able to work offline, and then sync with the server when online (this is why I am using PouchDB and CouchDB, working great so far)
it should let the user create an account with a username and password, which would then be stored within the app so that he does not have to log in again whenever he launches the app. This account will make sure his data is then synced to the server in a secure place so that other users cannot access it.
currently there is no need to have shared information between users
Based on what I have read I am considering the following:
on the server, have one database per user, storing his own data
on the server, have a master database, storing all the data of all users, plus the design docs. This makes it easy to change the design docs in a single place and have them replicated to each user database (and then into the PouchDB database in the app). The synchronization of data between the master and the user DBs is done through a filter, so that only the docs belonging to one user (through some userId field) are replicated to that user's database (see the sketch after this list)
use another module/plugin (SuperLogin? nolanlawson/pouchdb-authentication?) to manage the users from the app (user creation, login, logout, password reset, email notification for a lost password, ...)
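For the filter mentioned above, a hedged sketch of what the wiring could look like: a filter function stored in a design doc on the master database, plus a POST to CouchDB's _replicate endpoint, written here in Scala with Java's built-in HTTP client (the database names, the app/by_user filter and the userId field are assumptions; authentication is omitted for brevity):

    import java.net.URI
    import java.net.http.{HttpClient, HttpRequest, HttpResponse}

    object FilteredReplication {
      // Filter stored in a design doc (_design/app) on the master database:
      //   "filters": { "by_user":
      //     "function(doc, req) { return doc.userId === req.query.userId; }" }

      def main(args: Array[String]): Unit = {
        val body =
          """{
            |  "source": "master",
            |  "target": "userdb_alice",
            |  "filter": "app/by_user",
            |  "query_params": { "userId": "alice" },
            |  "continuous": true
            |}""".stripMargin

        val request = HttpRequest.newBuilder()
          .uri(URI.create("http://localhost:5984/_replicate")) // placeholder host
          .header("Content-Type", "application/json")
          .POST(HttpRequest.BodyPublishers.ofString(body))
          .build()

        val response = HttpClient.newHttpClient()
          .send(request, HttpResponse.BodyHandlers.ofString())
        println(response.body()) // CouchDB replies with the replication status
      }
    }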
My questions:
do you think this architecture is appropriate, or do you have something better to recommend?
which software would you recommend for the user management? SuperLogin looks great but needs to run on a separate HTTP server, making the architecture more complex. Does it automatically create a new database for each new user (I don't think so)? nolanlawson/pouchdb-authentication is client-only, but does it fit well with Ionic 1? Aren't there a LOT of things to develop around it that come out of the box with SuperLogin? Do you have any other module in mind?
Many thanks in advance for your help!
This is an appropriate approach. The local PouchDBs will provide the data on the client side even if a client goes offline. And the combination with a central CouchDB server is a great way to keep data synchronized between server and clients.
You want to store the user's credentials, so you will have to save this data somehow on the client side, which could be done in a separate PouchDB.
If you keep all your user data in a local PouchDB database and have one CouchDB database per user on the server, you can even omit the filter you mentioned, because the synchronization will only happen between these two user databases.
I recommend SuperLogin. Yes, you have to install NodeJS and some extra libraries (namely morgan, express, http, body-parser and cors), and you will have to open at least one new port on your server to provide this service. But SuperLogin is really powerful for managing user accounts and user databases on a CouchDB server.
For example, if a user registers, you just make a call to SuperLogin via http://server_address:port/auth/register, passing the user name, password, etc., and SuperLogin not only adds this new user to the user database, it also automatically creates a new database just for this user. Each user can have multiple databases (private or shared) and SuperLogin manages the access rights to all these databases. Moreover, SuperLogin can also send confirmation emails or reset forgotten passwords (via an access token).
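For illustration, such a registration call could look like the Scala sketch below; the host, port and exact field set are assumptions based on SuperLogin's default configuration, so check the version you deploy:

    import java.net.URI
    import java.net.http.{HttpClient, HttpRequest, HttpResponse}

    object RegisterUser {
      def main(args: Array[String]): Unit = {
        // Field names follow SuperLogin's default registration form (assumed).
        val body =
          """{
            |  "name": "Alice Example",
            |  "username": "alice",
            |  "email": "alice@example.com",
            |  "password": "s3cret!pass",
            |  "confirmPassword": "s3cret!pass"
            |}""".stripMargin

        val request = HttpRequest.newBuilder()
          .uri(URI.create("http://server_address:3000/auth/register")) // placeholders
          .header("Content-Type", "application/json")
          .POST(HttpRequest.BodyPublishers.ofString(body))
          .build()

        val response = HttpClient.newHttpClient()
          .send(request, HttpResponse.BodyHandlers.ofString())
        println(s"${response.statusCode()}: ${response.body()}")
      }
    }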
Sure, you will have to configure a lot (but, hey, at least you have all these options), and maybe you will even have to write some additional API for functionality not covered by SuperLogin. But in general, SuperLogin saves a lot of pain in developing custom user management.
If you are unsure about the server configuration, though, maybe a service such as Couchbase, Firebase etc. is a better solution. These services also have some user management capabilities, and you have to worry less about server security.

Postgres connection pooling - multiple users

In order to secure our database we create a schema for each new customer. We then create a user for this schema and when a customer logs in via the web we use their user and hence prevent them gaining access to other areas of the database.
Our issue is with connection pooling as it is a bit inefficient to keep creating/dropping new connections for these users. We would like to have a solution that can work across many hundreds of different database users.
We've looked at pgbouncer, but the issue here is that we have to create a text record in an ini file for each user and restart pgbouncer every time we set up a customer. This is not a great solution.
Is there an alternative solution that works in real time and would mean a customer's connection(s) stay in the pool while they are active?
According to the latest release notes pgbouncer might actually do this. But I haven't tried.
Pooling mode can be configured both per-database and per-user.
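In pgbouncer.ini that looks roughly like this (a hedged sketch; option availability varies by pgbouncer version):

    [databases]
    ; per-database override: this database always uses session pooling
    reports = host=127.0.0.1 dbname=reports pool_mode=session

    [users]
    ; per-user override, available in newer pgbouncer releases
    batch_user = pool_mode=statement

    [pgbouncer]
    ; default for everything not overridden above
    pool_mode = transaction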
As for the use case in general: we also had this kind of issue a while ago. We just went with connection pooling with one user/database and multiple schemas. Before running a query we just issued SET search_path TO schema_name. As for logging, we had a compliance mode in which we could log activity per customer and save it in the appropriate schema.
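A minimal sketch of that approach in Scala over JDBC; pool setup is elided, and the schema name must come from trusted session data, never from raw user input:

    import java.sql.Connection

    object SchemaScopedQuery {
      // Pins the pooled connection to one tenant's schema around `body`.
      def inSchema[A](conn: Connection, schema: String)(body: Connection => A): A = {
        val st = conn.createStatement()
        // Quote the identifier so odd schema names cannot break out of the SET.
        val quoted = "\"" + schema.replace("\"", "\"\"") + "\""
        st.execute(s"SET search_path TO $quoted")
        try body(conn)
        finally {
          st.execute("SET search_path TO DEFAULT") // leave no tenant state behind
          st.close()
        }
      }
    }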

Is there a way of connecting to shared OpenEdge RDBMS with read only access?

Our new security policies require restricting developers' data access to the production database. Setting the -RO parameter does not work, for several reasons (extracts from the 'Startup Command and Parameter Reference', http://documentation.progress.com/output/OpenEdge102b/pdfs/dpspr/dpspr.pdf):
1) "If you use the -RO parameter when other users are updating the database, you might see invalid data, such as stale data or index entries pointing to records that have been deleted."
2) "A read-only session is essentially a single-user session. Read-only users do not share database resources (database buffers, lock table, index cursors)."
3) "When a read-only session starts, it does not check for the existence of a lock file for the database. Furthermore, a read-only user opens the database file, but not the log or before-image files.
Therefore, read-only user activity does not appear in the log file."
We would like to be able to access data in the production database from OpenEdge Architect, but not be able to edit it. Is that possible?
In most security conscious companies developers are not allowed to access production. Period. Full stop.
One thing that you could do as a compromise... if the need is to occasionally query data you could give them access to a replicated database via OpenEdge Replication Plus. This is a read-only db connection without the drawbacks of -RO. It is real-time, up to date and access is separately controlled -- you could, for instance, put the replicated db on a different server that is on a different subnet.
The short answer is no, they can't access it directly and read-only.
If you have an appserver, you could write some code which would provide a level of dynamic RO data access via appserver or webservice calls.
The other question I'd have is - what are your developers doing accessing the production database? That should be a big no-no.

How to implement Tenant View Filter security pattern in a shared database using ASP.NET MVC2 and MS SQL Server

I am starting to build a SaaS line-of-business application in ASP.NET MVC2, but before I start I want to establish a good architectural foundation.
I am going towards a shared database and shared schema approach because the data architecture and business logic will be quite simple and efficiency along with cost effectiveness are key issues.
To ensure good isolation of data between tenants I would like to implement the Tenant View Filter security pattern (take a look here). In order to do that my application has to impersonate different tenants (DB logins) based on the user that is logging in to the application. The login process needs to be as simple as possible (it's not going to be enterprise class software) - so a customer should only input their user name and password.
Users will access their data through their own sub-domain (using Subdomain routing) like http://tenant1.myapp.com or http://tenant2.myapp.com
What is the best way to meet this scenario?
I would also suggest using two databases, a ConfigDB and a ContentDB.
The ConfigDB contains the tenant table, holding the hostname, database name, SQL username and SQL password of the Content database for each tenant, and is accessed via a separate SQL user called usrAdmin.
The ContentDB contains all the application tables, segmented on the SID (or SUSER_ID) of the user, and is accessed by each tenant's SQL user, called usrTenantA, usrTenantB, usrTenantC etc.
To retrieve data, you connect to the ConfigDB as admin, retrieve the credentials for the appropriate client, connect to the server using the retrieved credentials and then query the database.
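The original stack is ASP.NET, but the lookup flow is language-agnostic; here it is as a hedged Scala/JDBC sketch, with made-up table and column names:

    import java.sql.{Connection, DriverManager}

    object TenantConnect {
      // Step 1: as usrAdmin, read the tenant's ContentDB credentials.
      def contentConnFor(tenantId: Int): Connection = {
        val cfg = DriverManager.getConnection(
          "jdbc:sqlserver://confighost;databaseName=ConfigDB", "usrAdmin", "secret")
        try {
          val ps = cfg.prepareStatement(
            "SELECT hostname, databasename, sqluser, sqlpassword " +
            "FROM tenant WHERE tenant_id = ?") // hypothetical schema
          ps.setInt(1, tenantId)
          val rs = ps.executeQuery()
          if (!rs.next()) sys.error(s"unknown tenant $tenantId")
          // Step 2: connect to the ContentDB as that tenant's own SQL user.
          DriverManager.getConnection(
            s"jdbc:sqlserver://${rs.getString("hostname")};" +
              s"databaseName=${rs.getString("databasename")}",
            rs.getString("sqluser"), rs.getString("sqlpassword"))
        } finally cfg.close()
      }
    }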
The reason I did this is horizontal scalability and the ability to isolate clients on demand.
You can now have many ContentDBs, maybe with every ten tenants that sign up you create a new database, and configure your application to start provisioning clients in that database.
Alternatively you could provision a few sql servers, create a content DB on each and have your code provision tenants on which ever server has the lowest utilization historically.
You could also host all your regular clients on server A and B, but Server C could have tenants in their own INDIVIDUAL databases, all the multitenancy code is still there, but these clients can be told they are now more secure because of the higher isolation.
The easiest way is to have a Tenants table which contains a URL field that you match up for all queries coming through.
If a tenant can have multiple URLs, then just have an additional table like TenantAlias which maintains the multiple URLs for each tenant.
Cache this table web side as it will be hit a lot; invalidate the cache whenever a value changes.
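A small Scala sketch of that lookup with an in-memory cache; the names are illustrative, and invalidate is whatever your admin code calls when a tenant's URLs change:

    import scala.collection.concurrent.TrieMap

    object TenantResolver {
      // host -> tenant id, filled lazily from the Tenants/TenantAlias tables
      private val cache = TrieMap.empty[String, Int]

      // Hypothetical database lookup against Tenants + TenantAlias.
      private def lookupInDb(host: String): Option[Int] = ???

      def tenantFor(host: String): Option[Int] =
        cache.get(host).orElse {
          val hit = lookupInDb(host.toLowerCase)
          hit.foreach(id => cache.put(host, id))
          hit
        }

      // Call this whenever a tenant's URL set changes.
      def invalidate(host: String): Unit = cache.remove(host)
    }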
You can look at DotNetNuke. It is an open source CMS that implements this exact model. I'm using the model in a couple of our apps and it works well.
BTW, for EVERY entity in your system you'll need a tenantid column that references the Tenants table above.
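Tying this back to the Slick option from the first question, an entity table carrying the tenantid column might be declared like this (a sketch; table and column names are illustrative):

    import slick.jdbc.SQLServerProfile.api._

    // Every entity table carries a tenantid column pointing at the Tenants table.
    class Invoices(tag: Tag) extends Table[(Long, Int, BigDecimal)](tag, "Invoices") {
      def id       = column[Long]("id", O.PrimaryKey, O.AutoInc)
      def tenantId = column[Int]("tenantid")
      def amount   = column[BigDecimal]("amount")
      def * = (id, tenantId, amount)
    }

    object Invoices {
      val table = TableQuery[Invoices]
      // Scope every query by tenant so cross-tenant reads cannot slip through.
      def forTenant(tenantId: Int) = table.filter(_.tenantId === tenantId)
    }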