Row-level security using Prisma and Postgres

I am using Prisma and graphql-yoga servers with a Postgres DB.
I want to implement authorization for my GraphQL queries. I saw solutions like graphql-shield that solve column-level security nicely - meaning I can define a permission and, according to it, block or allow a specific table or column of data (or, in GraphQL terms, block a whole entity or a specific field).
The part I am stuck on is row-level security - filtering rows by the data they contain. Say I want to allow a logged-in user to view only the data that is related to them: depending on the value in a user_id column, I would allow or block access to that row (the logged-in user is one example, but there are other use cases in this genre).
This type of security requires running a query to check which rows the current user has access to, and I can't find a way (that is not horrible) to implement this with Prisma.
If I were working without Prisma, I would implement this at the level of each resolver, but since I am forwarding my queries to Prisma, I do not control the internal resolvers of a nested query.
But I do want to work with Prisma, so one idea we had was handling this at the DB level using a Postgres policy. This could work as follows:
Every query we run will be wrapped in "begin transaction" and "commit transaction".
Before the query, I want to run "set local context.user_id to 5".
Then I want to run the query, and the policy will filter results according to current_setting('context.user_id') - roughly as in the sketch below.
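In SQL terms, the one-time setup and per-request flow we have in mind would look roughly like this (a sketch only - "orders" and its user_id column stand in for whatever table we want to protect):

-- Hypothetical table to protect, e.g. orders(id int, user_id int, ...); done once:
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
CREATE POLICY orders_owner ON orders
    USING (user_id = current_setting('context.user_id')::int);

-- What every request coming from the server would need to run:
BEGIN;
SET LOCAL context.user_id = '5';   -- the logged-in user's id
SELECT * FROM orders;              -- the policy filters to rows where user_id = 5
COMMIT;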
For this to work I would need Prisma to allow me either to add pre/post queries to each query that runs, or to let me set a context for the DB.
But these options are not available in Prisma.
Any ideas?

You can use prisma-client instead of prisma-binding.
With prisma-binding, you define the top-level resolver and then delegate to Prisma for all the nesting.
On the other hand, prisma-client only returns the scalar values of a type, and you need to define the resolvers for the relations yourself, which means you have complete control over what you return, even for nested queries (see the documentation for an example).
I would suggest you use prisma-client to apply your security filters on the fields.

With the approach you're looking to take, I'd definitely recommend a look at Graphile. It approaches row-level security essentially the same way that you're thinking of. Unfortunately, it seems like Prisma doesn't help you move away from writing traditional REST-style controller methods in this regard.

Related

How to implement complex permission based data access in Postgres with Postgraphile or alternatives

For a new project, we're currently designing a database and an API to access it. We've already established we'll be using PostgreSQL for the database, and we want to access it via a GraphQL API.
To ease maintainability, we looked at several intermediaries between client/API/database, mainly Prisma, PostGraphile and Hasura. PostGraphile stood out because of its ease of use and its focus on handling things "in the database" as opposed to in your backend code. However, we ran into issues when figuring out how to implement this.
Allow me to expand on what we designed thus far:
Provisional database design:
users table
groups table
roles table
u_g_r table: A user can be part of multiple groups, and can have multiple roles in each group. This table represents foreign keys for users, groups and roles, as many-to-many relations can exist in virtually all combinations.
Data Permissions:
We want users to grant others access to their personal data in several steps, preferably for each group. For example:
level 3: Yourself and only absolutely necessary people, such as an account manager
level 2: Only people in group X, Y, etc.
level 1: Everybody
It would be awesome if it was possible to set this for various types of data, for example grant level 2 for your phone number, but only level 1 for your physical address.
So, these levels (1, 2, 3) would accompany data in the database, like phone_number and phone_number_access_level for example. Then, in the u_g_r junction table, each combination of user/group/role would have an allowed level attached to it, which must be at least as high as the level required for the relevant data. Thus, if your role allowed access to data on level 2, you would be able to view data on levels 1 and 2, but not level 3 (see the sketch below).
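To make that concrete, a rough sketch of what we have in mind (all names are provisional and the real tables would carry more columns):

-- Per-column levels stored next to the data:
CREATE TABLE users (
    id                        serial PRIMARY KEY,
    name                      text,
    phone_number              text,
    phone_number_access_level int,   -- 1 = everybody ... 3 = owner + account manager
    address                   text,
    address_access_level      int
);

CREATE TABLE groups (id serial PRIMARY KEY, name text);
CREATE TABLE roles  (id serial PRIMARY KEY, name text);

-- The allowed level per user/group/role combination:
CREATE TABLE u_g_r (
    user_id       int REFERENCES users (id),
    group_id      int REFERENCES groups (id),
    role_id       int REFERENCES roles (id),
    allowed_level int NOT NULL,
    PRIMARY KEY (user_id, group_id, role_id)
);

The check we want, in words: show users.phone_number to a viewer only if the viewer has a u_g_r row (in a group shared with that user) with allowed_level >= phone_number_access_level.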
Postgres allows both column- and row-level security, to let users access certain data. The PostGraphile wiki goes into some detail (here and here) on how you would make this work with JWT claims instead of Postgres roles.
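As far as we understand it, that pattern boils down to the server setting the verified claims as local settings on each request, and policies reading them back - assuming the claim lands in a setting named jwt.claims.user_id, roughly:

ALTER TABLE users ENABLE ROW LEVEL SECURITY;
CREATE POLICY own_row ON users
    USING (id = current_setting('jwt.claims.user_id')::int);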
Our problem arises when we want to implement the above features. It seems we want a kind of 'field-level security' that does not exist, but I can't imagine others not having had the same issues.
What would you advise us to do? Please let me know if there are options we've missed, or whether there are other options that are better for us!
Implementing this outside the database, in backend code, might be the easiest way in and of itself, but it greatly impacts maintainability for us, as the main luxury of things like PostGraphile is removing the need to write GraphQL schemas and resolvers ourselves.
It seems that you want all users to see all table rows, but only certain columns.
You probably cannot use column permissions, because these can only allow or deny access to the column as a whole and do not respect who “owns” a certain table row.
So perhaps views can do what you want, for example:
CREATE VIEW users_view
WITH (security_barrier = true, check_option = local) AS
SELECT /* accessible to everyone */
       username,
       /* accessible only to certain groups */
       CASE WHEN pg_has_role('x', 'USAGE') OR pg_has_role('y', 'USAGE')
            THEN level2_col
            ELSE NULL
       END AS level2_col,
       /* accessible only to admins and owner */
       CASE WHEN username = current_user OR pg_has_role('admin', 'USAGE')
            THEN level3_col
            ELSE NULL
       END AS level3_col
FROM users;
security_barrier makes sure that nobody can use functions with side effects to subvert security, and check_option ascertains that nobody can INSERT a row that is not visible to themselves.
You can allow DML operations on the views if you define INSTEAD OF triggers.
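For example, a minimal update trigger could look like this (just a sketch - a real trigger would re-apply the same permission checks before writing, so that masked NULLs don't overwrite real data):

CREATE FUNCTION users_view_upd() RETURNS trigger
    LANGUAGE plpgsql AS
$$BEGIN
    UPDATE users
       SET level2_col = NEW.level2_col,
           level3_col = NEW.level3_col
     WHERE username = OLD.username;
    RETURN NEW;
END;$$;

CREATE TRIGGER users_view_upd
    INSTEAD OF UPDATE ON users_view
    FOR EACH ROW EXECUTE FUNCTION users_view_upd();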
Based on Laurenz Albe's answer, I created an immense view covering all kinds of columns. It worked, certainly, and even with several thousand entries of mock data it was still relatively quick.
When I got back to it last week, an arguably cleaner solution dawned on me. Instead of using custom views like this, I'm now putting the sensitive data in separate tables, linking them with foreign keys and enabling Row Level Security on those tables.
I haven't done any benchmarks, but it should be faster, as this data isn't always requested anyway. It at least saves complicated views with a lot of boilerplate!
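Roughly what it looks like now (placeholder names; the user id comes from the setting PostGraphile exposes for the JWT claims):

CREATE TABLE user_phone_numbers (
    user_id      int REFERENCES users (id),
    phone_number text
);

ALTER TABLE user_phone_numbers ENABLE ROW LEVEL SECURITY;

-- Only the owner (and, say, admins) get rows back at all:
CREATE POLICY phone_owner ON user_phone_numbers
    USING (user_id = current_setting('jwt.claims.user_id')::int
           OR pg_has_role('admin', 'USAGE'));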

How to deal with complex permissions in Hasura

Basics - I need to return data from columns based on some variables from a different table (I either return the column or null if access is not allowed).
I have already done what I need via a custom function in Postgres, but the problem is that in Hasura, functions share the permissions of the table/view they return a SETOF of.
So I have to allow access to the table itself, and as a result the permissions in my function are kind of meaningless, because anyone will be able to access the data simply by querying the original table directly.
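For reference, the shape of what I have is roughly this (heavily simplified, with made-up names, and with the caller's identity reduced to a plain argument):

-- Assume tables like documents(id int, owner_id int, body text)
-- and permissions(document_id int, user_id int, can_read boolean).
-- Returns rows of "documents", nulling out the sensitive column unless a row in
-- "permissions" says the viewer may read it.
CREATE FUNCTION visible_documents(viewer_id int)
RETURNS SETOF documents
LANGUAGE sql STABLE AS
$$
    SELECT d.id,
           d.owner_id,
           CASE WHEN p.can_read THEN d.body ELSE NULL END AS body
      FROM documents d
      LEFT JOIN permissions p
             ON p.document_id = d.id AND p.user_id = viewer_id;
$$;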
My current line of thinking is that the only way to do what I need is to create a remote schema and remove access to the original table.
But maybe there is a way to not expose some of the tables as a graphql query? If I could do something like this - I'd just hide my table and expose only a function.
The remote schema seems like it would work.
Another option would be the allow-queries option.
It's possible to limit queries. It's a bit tricky, it seems: you need an exact copy of every query that should be allowed (with the fields in exactly the right order), but if you do that, only your explicitly whitelisted queries will be accepted. More info in the docs.
I'm not familiar enough with postgres permissions to offer any better ideas...

GraphQL, Cassandra and denormalization strategy

Would a database like Cassandra and a schema like GraphQL work well together?
Cassandra ideology is based on the idea of optimizing your queries and denormalizing data. This doesn't seem to really mesh well with a GraphQL ideology where data seems to be accessible in every level of a query.
Example:
Suppose I architect my Cassandra table like so:
User:
    name
    address
    etc... (many properties)
Group:
    id
    name
    user_name (denormalized user, where we generally just need the name of a user)
But with GraphQL, one wouldn't exactly expect a denormalized User:
query getGroup {
    group(id: 1) {
        name
        users {
            name
        }
    }
}
So a couple of things:
1.) This GraphQL query could end up hitting our Cassandra database multiple times (assuming no caching): once for the group name, and then potentially once per user. But let's say our resolver creates multiple User objects with one Cassandra call.
2.) We can't really build an idiomatic, denormalized Cassandra database with GraphQL in mind, can we? Otherwise we should expect that certain properties of a User aren't returned to us by the query.
To sum up the question: what's the GraphQL strategy for working with denormalized data? Is it acceptable to omit certain properties that the client thinks are accessible? E.g. the client tries to access the address of a user, but we don't have that at the moment because our data is denormalized. Or should one not even worry about denormalization and just let GraphQL make calls with a caching mechanism between the DB and GraphQL, e.g. GraphQL first gets the group, then gets the user data for the group id.
This is a side effect of GraphQL, where a query can get quite complex in retrieving the data. But as long as the user is actually requesting only the data they need, and you are smart about your resolvers, the end result will actually be faster.
Consider tools like dataloader to batch and cache lookups when resolving a query.
As far as omitting certain properties goes, GraphQL validates the response and will throw an error, although it will also return the data you gave. It would probably be better to implement some sort of timeout and throw a more descriptive error if there is an issue retrieving the data.

SQL Server - Return rows based on user role

We are developing an Access application with a SQL Server backend. We have a table that has records that belong to division A, B or C. The users also belong to role A, B or C. We want each user to see only their corresponding division records, as well as only certain columns.
I've thought of two ways. One is making different queries for each role and then, based on the user's role, changing the source object of the form. However, I don't know if it is possible to retrieve the user's role from SQL Server with VBA (all the VBA documentation I've found so far is quite lacking).
The other solution I thought of was to implement this on the server; however, I don't know how a T-SQL query or view could fetch only the information needed based on the user's role.
Any ideas?
PS: I can't use functions or stored procedures. For some reason the SQL Server we have been provided has them disabled and IT Ops won't enable them (Don't know the logic behind that).
Okay, it's been a while since I posted this but I'll post the solution I came up with in the end. VBA is not quite necessary in this case. It can be done perfectly with views.
To retrieve the user's roles, (inner) join the sys.database_role_members table with sys.database_principals twice, once on the role id and once on the member id. With this, you get a list of all roles and their corresponding users. To get the roles of the user querying the database, simply add a WHERE clause that checks that the user name matches the result of the USER_NAME() function.
Then, don't give those roles permission to access the table we want to restrict. Instead, make a view that fetches the info from that table and add a WHERE clause that checks a column's value against the query that retrieves the user's roles, as in the sketch below.
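Roughly like this (table, column and role names are just examples):

-- All the roles of the current user:
SELECT r.name AS role_name
  FROM sys.database_role_members drm
  JOIN sys.database_principals r ON r.principal_id = drm.role_principal_id
  JOIN sys.database_principals m ON m.principal_id = drm.member_principal_id
 WHERE m.name = USER_NAME();
GO

-- A view that only returns the rows whose division matches one of those roles:
CREATE VIEW dbo.DivisionRecords_v AS
SELECT ColA, ColB                      -- expose only the permitted columns
  FROM dbo.DivisionRecords
 WHERE Division IN (SELECT r.name
                      FROM sys.database_role_members drm
                      JOIN sys.database_principals r ON r.principal_id = drm.role_principal_id
                      JOIN sys.database_principals m ON m.principal_id = drm.member_principal_id
                     WHERE m.name = USER_NAME());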
With this, you can create a linked table in Access to the view, and it will allow you to see only the records that correspond to the user's roles.
While this approach is easy, it doesn't allow for more complicated row level security. For a more powerful approach it might be useful to check the following link.
https://msdn.microsoft.com/en-us/library/dn765131.aspx
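For anyone curious, the row-level security feature described there (SQL Server 2016+) is along these lines - note that it relies on an inline table-valued function, which wasn't an option in my environment:

CREATE FUNCTION dbo.fn_division_filter (@Division AS sysname)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS allowed WHERE IS_ROLEMEMBER(@Division) = 1;
GO

CREATE SECURITY POLICY dbo.DivisionFilter
    ADD FILTER PREDICATE dbo.fn_division_filter(Division) ON dbo.DivisionRecords
    WITH (STATE = ON);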
You could create the same tables in different schemas and assign user rights to the different schemas. For example, instead of using dbo.Users you could have Accounting.Users and Warehouse.Users, and assign users in an accounting group to the Accounting schema. Or, as suggested above, those could be views within a schema that select data from the underlying tables.
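A minimal sketch of that idea (schema, view and group names are hypothetical):

CREATE SCHEMA Accounting;
GO
CREATE SCHEMA Warehouse;
GO
-- These could be real tables, or views over a shared dbo.Users as suggested above:
CREATE VIEW Accounting.Users AS
    SELECT UserName, Division FROM dbo.Users WHERE Division = 'A';
GO
GRANT SELECT ON SCHEMA::Accounting TO AccountingGroup;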

How to inspect every query going to DB from Zend Framework

I have a complex reporting application that allows clients to login and view reports for their client data. There are several sections of the application where there are database calls, using various controllers. I need to make sure that client A doesn't get client B's information via header manipulation.
The system authenticates them and assigns a clientID and roleID. If your roleID > 1, that means you work for the company hosting the data, and you can see all client info. I want to create a catch-all that basically works like this:
if ($roleID > 1) {
    // ...send query to database
} else {
    if (/* does this query select a record with a clientID other than $auth->clientID? */) {
        // do not execute query
    } else {
        // execute query
    }
}
The problem is, I want this to run for every query that goes to the server... how can I place this code as a "roadblock" between the application and the DB? I already use Zend_Profiler to look at queries, so I know it is somehow possible, but cannot discern this from the Profiler code...
I can always write an authentication function and pass selected queries that way, but this catch-all would be easier to implement across all of the calls and would be future proof. Any help is appreciated.
It's an application design fault.
You should use a 'service architecture' - the only entry point for queries would be a service, with all the checks inside it.
If this is something you want to run on every query, I'd suggest extending Zend_Db_Select and overriding either the query() or assemble() methods to add in your logic. You'll also want to add a way for it to be aware of your $auth object.
Another option is to extend your database adapter so you can intercept the queries directly. IMO, you should try and do this at the application level though.
Depending on your database server, you can put a trace on the DB side.
Here's an example for Oracle:
http://orafaq.com/wiki/SQL_Trace