Check for object ownership with Prisma

I'm new to working with Prisma. One aspect that is unclear to me is the right way to check if a user has permission on an object. Let's assume we have Book and Author models. Every book has an author (one-to-many). Only authors have permission to delete books.
An easy way to enforce this would be this:
prismaClient.book.deleteMany({
  where: {
    id: bookId,        // id is known
    author: {
      id: userId       // id is known
    }
  }
})
But this way it's very hard to show an UnauthorizedError to the user. Instead, the response will be a 500 status code since we can't know the exact reason why the query failed.
The other approach would be to query the book first and check the author of the book instance, which would result in one more query.
Is there a best practice for this in Prisma?

Assuming you are using PostgreSQL, the best approach would be to use row-level security (RLS) - but unfortunately, it is not yet officially supported by Prisma.
There is a discussion about this subject here:
https://github.com/prisma/prisma/issues/5128
As for the current situation, in my opinion it is better to use an additional query and provide the users with informative feedback, rather than use the other method you suggested and not know why nothing was deleted.
Ultimately, it is up to you to decide, based on your use case, whether or not it is important for you to know the reason for failure.

This question is more generic than Prisma - it also applies when running updates/deletes in raw SQL.
When you have extra where clauses to check for ownership and the update does not happen, it's difficult to infer which of the clauses caused the failure without further queries.
You can achieve this with row-level security in Postgres, but even that does not come out of the box and involves custom configuration to throw specific exceptions when rows are not found due to row-level security rules. See this answer for more detail.
I tend to think that doing customised stuff like this is rarely worth the tradeoff, unless you need specialised UX for an uncommon circumstance.
What I would suggest instead in this case is to keep it simple and just use extra queries to check for ownership, but optimise the UX for the case where the user does own the entity, and keep that common and legitimate use case to a single query.
That is, catch the exception from Prisma (or the fact that the update returns 0 rows, or whatever it is in different cases), and only then run a specific select for ownership, to check whether that was the reason the update failed.
This is a nice tradeoff because it keeps things simple, and only runs extra queries in the (usually) far less common failure case.
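For concreteness, here is a minimal sketch of that pattern with Prisma, in TypeScript. The Book model, its author relation, and the error/status mapping are assumptions carried over from the question, not a prescribed API:

import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function deleteBook(bookId: number, userId: number): Promise<void> {
  // Common case, one query: delete the book only if the caller is its author.
  const result = await prisma.book.deleteMany({
    where: { id: bookId, author: { id: userId } },
  });

  if (result.count === 0) {
    // Failure case only: pay for a second query to find out why nothing was deleted.
    const book = await prisma.book.findUnique({ where: { id: bookId } });
    if (!book) {
      throw new Error("NotFound");   // map to a 404 in your request handler
    }
    throw new Error("Unauthorized"); // map to a 403 in your request handler
  }
}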
Even having said all that, the broader caveat as always is that the extra queries probably won't matter regardless. In 99% of cases it's probably best to just always run the extra ownership query upfront as a pattern, to keep things as simple as possible - simplicity is what you really want to optimise for over performance until you're running at significant scale.

Related

Composite unique constraint on business fields with Axon

We leverage the AxonIQ Framework in our system. We've faced a problem implementing a composite unique constraint based on aggregate business fields.
Consider the following Aggregate:
@Aggregate
public class PersonnelCardAggregate {
    @AggregateIdentifier
    private UUID personnelCardId;
    private String personnelNumber;
    private Boolean archived;
}
We want to avoid personnelNumber duplicates in the scope of NOT-archived (archived == false) records. At the same time personnelNumber duplicates may exist in the scope of archived records.
A Query Side check seems NOT to be an option. Taking into account the eventually consistent nature of our system, more than one creation request with the same personnelNumber may exist at the same time and the Query Side may be behind.
What would the solution be?
What you're asking about is an issue that can occur as soon as you start implementing an application along the CQRS paradigm and DDD modeling techniques.
The PersonnelCardAggregate in your scenario maintains the consistency boundary of a single "Personnel Card". You are, however, looking to expand this scope to achieve a uniqueness constraint among all Personnel Cards in your system.
I feel that this blog explains the problem of "Set Based Consistency Validation" you are encountering quite nicely.
I will not reiterate his entire blog, but he sums it up as four options for resolving the problem:
Introduce locking, transactions and database constraints for your Personnel Card
Use a hybrid locking field prior to issuing your command
Rely on the eventually consistent Query Model
Re-examine the domain model
To be fair, option 1 won't do if you're using the Event-Driven approach to updating your Command and Query Model.
Option 3 has been pushed back by you in the original question.
Option 4 is something I cannot deduce for you, given that I am not the domain expert, but I am guessing that the PersonnelCardAggregate does not belong to a larger encapsulating Aggregate Root. Maybe the business constraint you've stated, and thus the option to reuse personnelNumbers, could be dropped or adjusted? Like I said though, I cannot state this as a factual answer for you, as I am not the domain expert.
That leaves option 2, which in my eyes would be the most pragmatic approach too.
I feel this would require a cache on your command-dispatching side to deal with quick successions of commands and resolve the eventual consistency issue. To catch the case where a faulty update still comes through accidentally, I'd introduce some form of Event Handler that (1) knows the entire set of "PersonnelCards" from a personnelNumber/archived point of view and (2) can react to a faulty introduction by dispatching a compensating action.
You'd thus introduce some business logic on the event handling side of your application, which I'd strongly recommend segregating from the part of the application that updates your query models (as the use cases are entirely different).
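To make option 2 slightly more concrete, here is a framework-agnostic sketch of the "hybrid lock" idea, written in TypeScript rather than Axon/Java purely for illustration. The ClaimStore and CommandGateway interfaces and every name in them are hypothetical; the point is only the ordering of claim, dispatch, and compensation:

// Hypothetical interfaces - the names and shapes are illustrative only.
interface ClaimStore {
  // Returns true if the claim was stored, false if the unique constraint was violated.
  tryClaim(personnelNumber: string): Promise<boolean>;
  release(personnelNumber: string): Promise<void>;
}

interface CommandGateway {
  send(command: object): Promise<void>;
}

// The "hybrid lock": claim the personnelNumber first, then dispatch the command.
async function createPersonnelCard(
  claims: ClaimStore,
  gateway: CommandGateway,
  personnelNumber: string
): Promise<void> {
  const claimed = await claims.tryClaim(personnelNumber);
  if (!claimed) {
    throw new Error("personnelNumber " + personnelNumber + " is already in use");
  }
  try {
    await gateway.send({ type: "CreatePersonnelCard", personnelNumber });
  } catch (err) {
    // Compensate: free the claim again if the command was rejected.
    await claims.release(personnelNumber);
    throw err;
  }
}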
Concluding though, this is a difficult topic with several ways around it.
It's not so much an Axon-specific problem, by the way, but more an occurrence of modeling your application through DDD and CQRS.

SQL Database Design - Flag or New Table?

Some of the Users in my database will also be Practitioners.
This could be represented by either:
an is_practitioner flag in the User table
a separate Practitioner table with a user_id column
It isn't clear to me which approach is better.
Advantages of flag:
fewer tables
only one id per user (hence no possibility of confusion, and no ambiguity about which id to use in other tables)
flexibility (I don't have to decide whether fields are Practitioner-only or not)
possible speed advantage for finding User-level information for a practitioner (e.g. e-mail address)
Advantages of new table:
no nulls in the User table
clearer as to what information pertains to practitioners only
speed advantage for finding practitioners
In my case specifically, at the moment, practitioner-related information is generally one-to-many (such as the locations they can work in, or the shifts they can work, etc.). I would not be at all surprised if it turns out I need to store simple attributes for practitioners (i.e., one-to-one).
Questions
Are there any other considerations?
Is either approach superior?
You might want to consider the fact that someone who is a practitioner today may be something else tomorrow (and by that I don't mean no longer a practitioner). Say a consultant, an author, or whatever the variants in your subject domain are, and you might want to keep track of their latest status in the Users table. So it might make sense to have a ProfType field (type of professional practice) or equivalent. This way you have all the advantages of having a flag: you could keep it as a string field and leave it as a blank string, or fill it with other ProfType codes as your requirements grow.
You mention that having a new table has the advantage of making it easier to find practitioners. No, you are better off with a WHERE clause on the Users table for that.
Your last paragraph (one-to-many), however, may tilt the whole choice in favour of a separate table. You might also want to consider the likely number of records, likely growth, the criticality of complicated queries, etc.
I tried to draw the two scenarios, with some notes inside the image. It's really only a draft, just to help you "see" the various entities. Maybe you have already done something like it; in that case, please disregard my answer. As Whirl stated in his last paragraph, you should consider other things too.
Personally I would go for a separate table - as long as you can already identify some extra data that make sense only for a Practitioner (e.g.: full professional title, University, Hospital or any other Entity the Practitioner is associated with).
So if in the future you discover more data that makes sense only for the Practitioner, and/or identify another distinct "subtype" of User (e.g. Intern), you can just add fields to the Practitioner subtable, or a new table for the Intern.
It might be advantageous to use a User Type field as suggested by @Whirl Mind above.
I think that this is just one example of having to identify different type of Objects in your DB, and for that I refer to one of my previous answers here: Designing SQL database to represent OO class hierarchy

Is it RESTful to DELETE collections?

Some say it's "often not desirable" for a REST server to allow DELETE of an entire collection of entities.
DELETE http://www.example.com/customers
Is this a real rule for achieving RESTful nirvana?
And what about sub-collections, defined by query parameters?
DELETE http://www.example.com/customers?gender=m
The answer to this depends more on the requirements and risks of your application than on the inherent RESTfulness of either construct.
It's "not often desirable" to delete an entire collection if you imagine the collection as something with enduring importance like a customer list. It doesn't break with some essential REST wisdom.
If the collection contains information that a user should be able to delete, and potentially a lot of such information, DELETE of the entire collection can be the nicest REST-ish way to go, rather than run a lot of individual DELETEs.
Deleting based on criteria (e.g. the query parameter) is so essential to some applications that if the REST police declared it Officially UnRESTful I would continue to do it without shame.
(They actually say "not often desirable," which one might interpret slightly differently than "often not desirable.")
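To illustrate the criteria-based delete, here is a hedged sketch of an Express-style handler for the second URL in the question. The route, the query parameter, and the customerStore helper are assumptions for illustration, not an endorsement of a particular stack:

import express from "express";

const app = express();

// Hypothetical data layer; replace with your ORM or SQL of choice.
const customerStore = {
  async deleteWhere(filter: { gender: string }): Promise<number> {
    return 0; // stub - would return the number of rows deleted
  },
};

// DELETE /customers?gender=m
app.delete("/customers", async (req, res) => {
  const gender = req.query.gender as string | undefined;

  // Refuse to wipe out the whole collection unless a filter is supplied,
  // reflecting the "enduring importance" concern above.
  if (!gender) {
    return res.status(400).json({ error: "a filter such as ?gender= is required" });
  }

  const deleted = await customerStore.deleteWhere({ gender });
  res.status(200).json({ deleted });
});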
Yes, it's RESTful. If you have a valid use case, it's fine to do it. Your second scenario (deleting with a query) is frequently useful, and can be an easy way to reduce the number of HTTP requests the client has to make.
Edit: as @peeskillet says, do consider whether you actually want to delete something, versus change some flag on the record (e.g. "active").

Meteor method vs. deny/allow rules

In Meteor, when should I prefer a method over a deny rule?
It seems to me that allow/deny rules should be favoured, as their goal is more explicit, and one knows where to look for them.
However, in the Discover Meteor book, preventing duplicate insertions (“duplicate” being defined as adding a document whose url property is already set on some other document of the same collection) is said to require a method (and is left as an exercise to the reader, chapter 8.3).
I think I am able to implement this check in a way that I find much clearer:
Posts.deny({
  update: function(userId, post, fieldNames, modifier) {
    // Deny the update if another post already has the same url.
    return Posts.findOne({ url: modifier.$set.url, _id: { $ne: post._id } });
  }
});
(N.B. if you know the example, yes, I voluntarily left out the “only a subset of the attributes is modified” check from the question to be more specific.)
I understand that there are other update operators than $set in Mongo, but they look typed and I don't feel like leaving a security hole open.
So: are there any flaws in my deny rule? Independently, should I favour a method? What would I gain from it? What would I lose?
Normally I try to avoid subjective answers, but this is a really important debate. First I'd recommend reading Meteor Methods vs Client-Side Operations from the Discover Meteor blog. Note that at Edthena we exclusively use methods for reasons which should become evident.
Methods
pro
Methods can correctly enforce schema and validation rules of arbitrary complexity without the need of an outside library. Side note - check is an excellent tool for validating the structure of your inputs.
Each method is a single source of truth in your application. If you create a 'posts.insert' method, you can easily ensure it is the only way in your app to insert posts.
con
Methods require an imperative style, and they tend to be verbose in relation to the number of validations required for an operation.
Client-side Operations
pro
allow/deny has a simple declarative style.
con
Validating schema and permissions on an update operation is infinitely hard. If you need to enforce a schema you'll need to use an outside library like collection2. This reason alone should give you pause.
Modifications can be spread all over your application. Therefore, it may be tricky to identify why a particular database operation happened.
Summary
In my opinion, allow/deny is more aesthetically pleasing; however, its fundamental weakness is in enforcing permissions (particularly on updates). I would recommend client-side operations in cases where:
Your codebase is relatively small - so it's easy to grep for all instances where a particular modifier occurs.
You don't have many developers - so you don't need to all agree that there is one and only one way to insert into X collection.
You have simple permission rules - e.g. only the owner of a document can modify any aspect of it.
In my opinion, using client-side operations is a reasonable choice when building an MVP, but I'd switch to methods for all other situations.
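For concreteness, here is a minimal sketch of the kind of 'posts.insert' method mentioned above, in TypeScript. The collection, field names, and error codes are assumptions borrowed from the Discover Meteor example, not a canonical implementation:

import { Meteor } from "meteor/meteor";
import { Mongo } from "meteor/mongo";
import { check } from "meteor/check";

// Normally this would be imported from wherever the collection is defined.
const Posts = new Mongo.Collection("posts");

Meteor.methods({
  "posts.insert"(url: string, title: string) {
    // Validate the structure of the inputs, as recommended above.
    check(url, String);
    check(title, String);

    if (!this.userId) {
      throw new Meteor.Error("not-authorized");
    }

    // The duplicate-url rule from the question, enforced in one place on the server.
    if (Posts.findOne({ url })) {
      throw new Meteor.Error("duplicate-url", "A post with this URL already exists");
    }

    return Posts.insert({ url, title, userId: this.userId, createdAt: new Date() });
  },
});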
update 2/22/15
Sashko Stubailo created a proposal to replace allow/deny with insert/update/remove methods.
update 6/1/16
The Meteor guide takes the position that allow/deny should always be avoided.

Can I use claims to secure EF fields using PostSharp?

Is it possible to use claims-based permissions to secure EF fields using PostSharp? We have a multi-tenanted app that we are moving to claims, and we also have the issue of who can read/write which fields. I saw this, but it seems role-based: http://www.postsharp.net/aspects/examples/security.
As far as I can see it would just be a case of rewriting the ISecurable part.
We were hoping to be able to decorate a field with a permission and silently ignore any write to it if the user did not have permission. We were also hoping that if they read it we could swap in another value, e.g. read Salary and get back 0 if you don't have a ReadSalary claim.
Are these standard sorts of things to do? I've never done any serious AOP, so I just wanted a quick confirmation before I mention this as an option.
Yes, it is possible to use PostSharp in this case, and it should be pretty easy to convert the given example from RBAC to claims-based.
One thing that has to be considered is performance. When decorated fields are accessed frequently while processing a use case (e.g. read inside a loop), a lot of time is wasted on redundant security checks. Decorating a method that represents an end-user's use case would be more appropriate.
I would be wary of silently swapping field values when the user has insufficient permission. It could lead to some very surprising results when an algorithm is fed artificial, unexpected data.