Bad practice to willfully allow errors using updateData? - google-cloud-firestore

I only want to update a document if it exists and I don't want to use a transaction because it's not offline-capable. Therefore, I use updateData(). However, this task is common in the UX and is likely to fail (the document won't exist) half of the time. I shudder at the idea of allowing errors that I know will happen but I see no other way to preserve offline capability and update documents only when they exist. Is this frowned upon by Firestore?

Firestore doesn't really care if your update fails when a document doesn't exist.
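A minimal sketch of the pattern the question describes: call update() and treat a not-found failure as the expected "document doesn't exist" outcome rather than an error worth surfacing. The `DocRef` interface and the error shape checked in `isNotFound` are stand-ins, not the real SDK types; the actual error code and shape vary by Firestore SDK and platform, so adapt that check to yours.

```typescript
// Stand-in for a Firestore document reference (not the official SDK type),
// so the error-handling pattern is self-contained.
interface DocRef {
  update(data: Record<string, unknown>): Promise<void>;
}

// Assumed error shape: Firestore SDKs typically surface a missing document
// as an error carrying a "not-found" code. Check your SDK's actual shape.
function isNotFound(err: unknown): boolean {
  return (
    typeof err === "object" &&
    err !== null &&
    (err as { code?: string }).code === "not-found"
  );
}

// Update the document only if it exists. A not-found error is an expected
// outcome (return false), not a failure; anything else is rethrown.
async function updateIfExists(
  doc: DocRef,
  data: Record<string, unknown>
): Promise<boolean> {
  try {
    await doc.update(data);
    return true; // document existed and was updated
  } catch (err) {
    if (isNotFound(err)) return false; // expected: document absent
    throw err; // a real error
  }
}
```

Callers then branch on the boolean instead of on a caught exception, which keeps the "document doesn't exist half the time" case out of error logs entirely.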

Flutter firestore partial updates with type safety

I'd love to be able to make type-safe partial updates to Firestore documents in Flutter, but can't find a solution anywhere.
We use freezed for our models, and withConverter in the Firestore SDK to translate between Firestore and our models. However, this requires writing every field of the document on each update, which causes problems with concurrent updates: if two users update different fields of the same document concurrently, last-in wins, and offline mode exacerbates the issue. So I'd prefer to update only the fields that have changed.
Option 1
One solution is to forget about type safety and just use the update method.
Option 2
Another solution is to wrap these updates in transactions, but then we can't take advantage of offline mode.
Option 3
A third solution might be to write a library (which apparently doesn't exist yet) that creates a Firestore update map by comparing two objects.
Does anyone know of something better?
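For Option 3, a hypothetical sketch of such a diffing helper: it compares two plain objects and emits a Firestore-style update map, using dotted field paths for changes inside nested maps (the form update() expects for partial updates). All names and shapes here are assumptions; a real version would also need to emit FieldValue.delete() for fields removed in the new object, which this sketch omits.

```typescript
// Build an update map by diffing two plain objects. Nested maps are
// recursed into and become dotted field paths ("b.c"); everything else
// (scalars, arrays) is compared by value and replaced wholesale.
function diffToUpdateMap(
  before: Record<string, unknown>,
  after: Record<string, unknown>,
  prefix = ""
): Record<string, unknown> {
  const updates: Record<string, unknown> = {};
  for (const key of Object.keys(after)) {
    const path = prefix ? `${prefix}.${key}` : key;
    const a = before[key];
    const b = after[key];
    const bothMaps =
      a !== null && b !== null &&
      typeof a === "object" && typeof b === "object" &&
      !Array.isArray(a) && !Array.isArray(b);
    if (bothMaps) {
      // Recurse so only the changed leaf fields appear in the map.
      Object.assign(
        updates,
        diffToUpdateMap(
          a as Record<string, unknown>,
          b as Record<string, unknown>,
          path
        )
      );
    } else if (JSON.stringify(a) !== JSON.stringify(b)) {
      updates[path] = b; // changed or newly added field
    }
  }
  return updates;
}
```

With freezed models you would serialize the old and new instances to JSON first, diff them with something like this, and pass the result to update() so only the touched fields are written.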

mongodb change stream, operation update, how to solve get previous value problem

I know that this feature is not implemented by MongoDB, so I am thinking about the best way to achieve it.
Using a caching service? The approach will work, but there is one problem: when the query you watch is too broad, like a whole collection, you will never have the first "before" value, because you start caching only when the first change appears on the watch:
Service started watching.
Received object id 1, no cache for previous change, caching value.
Received object id 1, cache for previous change exists, can do comparison, caching value.
I see another problem here: if I have two watchers that could potentially receive information about the same object, this will cause sync problems, as one process may update the cache while the second receives stale data. I mean the second process could find that the cached previous value is already the same as the one in the MongoDB change stream event.
I was also thinking about MongoDB replicas, but I'm not sure the problem can be solved with them.
Best,
Igor
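The caching approach described in the question can be sketched roughly like this. The `ChangeEvent` shape is an assumed simplification of a change stream event, not the real driver type:

```typescript
// Assumed, simplified shape of a change stream event: the document's id
// plus its new full state (as delivered with fullDocument enabled).
interface ChangeEvent {
  documentId: string;
  fullDocument: Record<string, unknown>;
}

// Per-id cache of the last seen state.
const previousById = new Map<string, Record<string, unknown>>();

// Returns the cached previous value, or null for the first event seen
// for this id (the "no before value" gap the question describes), and
// records the new state for the next comparison.
function onChange(event: ChangeEvent): Record<string, unknown> | null {
  const previous = previousById.get(event.documentId) ?? null;
  previousById.set(event.documentId, event.fullDocument);
  return previous;
}
```

With two watchers, this map would have to be shared and updated atomically, which is exactly the sync problem raised above. It's also worth checking your server version: MongoDB 6.0 added change stream pre- and post-images (the `changeStreamPreAndPostImages` collection option together with `fullDocumentBeforeChange` on the watch), which make the event itself carry the before value and sidestep the cache entirely.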

Check for object ownership with Prisma

I'm new to working with Prisma. One aspect that is unclear to me is the right way to check if a user has permission on an object. Let's assume we have Book and Author models. Every book has an author (one-to-many). Only authors have permission to delete books.
An easy way to enforce this would be this:
prismaClient.book.deleteMany({
  where: {
    id: bookId,   // id is known
    author: {
      id: userId  // id is known
    }
  }
})
But this way it's very hard to show an UnauthorizedError to the user: when nothing is deleted, we can't know the exact reason why, so the response ends up as a generic 500 status code.
The other approach would be to query the book first and check the author of the book instance, which would result in one more query.
Is there a best practice for this in Prisma?
Assuming you are using PostgreSQL, the best approach would be to use row-level security (RLS), but unfortunately it is not yet officially supported by Prisma.
There is a discussion about this subject here: https://github.com/prisma/prisma/issues/5128
As for the current situation, in my opinion it is better to use an additional query and give the users informative feedback, rather than use the other method you suggested without knowing why the delete failed.
Eventually, it is up to you to decide based on your use case - whether or not it is important for you to know the reason for failure.
So this question is more generic than Prisma; it is also true when running updates/deletes in raw SQL.
When you have extra where clauses to check for ownership and the update does not happen, it's difficult to infer which clause(s) caused that without further queries.
You can achieve this with row-level security in Postgres, but even that does not come out of the box and involves custom configuration to throw specific exceptions when rows are not found due to row-level security rules. See this answer for more detail.
I tend to think that doing customised stuff like this is rarely worth the tradeoff, unless you need specialised UX for an uncommon circumstance.
What I would suggest instead in this case is to keep it simple and just use extra queries to check for ownership, but optimise the UX optimistically for the case where the user does own the entity, keeping that common and legitimate use case to a single query.
That is, catch the exception from Prisma (or the fact that the update returns 0 rows, or whatever it is in different cases), and only then run a specific select for ownership, to check if that was the reason the update failed.
This is a nice tradeoff because it keeps things simple, and only runs extra queries in the (usually) far less common failure case.
Even having said all that, the broader caveat, as always, is that the extra queries probably won't matter regardless. In 99% of cases it's probably best to just always run the extra ownership query upfront as a pattern, to keep things as simple as possible; simplicity is what you really want to optimise for over performance, until you're running at significant scale.
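The first pattern above can be sketched with a minimal stand-in for the Prisma client (hypothetical shapes, not the real `@prisma/client` types): the common path stays a single query, and the ownership lookup runs only when nothing was deleted.

```typescript
// Stand-in for the generated Prisma delegate for Book (assumed shape).
interface BookDelegate {
  deleteMany(args: {
    where: { id: string; author: { id: string } };
  }): Promise<{ count: number }>;
  findUnique(args: { where: { id: string } }): Promise<{ id: string } | null>;
}

type DeleteResult = "deleted" | "not-found" | "forbidden";

// Common path: one query. Only when count is 0 do we run a second
// query to work out *why* nothing was deleted.
async function deleteOwnBook(
  books: BookDelegate,
  bookId: string,
  userId: string
): Promise<DeleteResult> {
  const { count } = await books.deleteMany({
    where: { id: bookId, author: { id: userId } },
  });
  if (count > 0) return "deleted";
  const book = await books.findUnique({ where: { id: bookId } });
  if (book === null) return "not-found"; // map to 404
  return "forbidden"; // book exists but the filter excluded it -> 403
}
```

The three-valued result lets the HTTP layer pick the right status code without the delete path ever paying for the extra query in the success case.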

Firestore, why use "update" instead of "set merge"?

set with merge will update fields in the document or create it if it doesn't exist
update will update fields but will fail if the document doesn't exist
Wouldn't it be much easier to always use set merges?
Are the prices slightly different?
Between set merge and update there is a difference in use case.
You may find detailed information regarding this in this post.
Regarding the pricing, as stated here:
Each set or update operation counts as a single write and is billed according to the region.
=========================================================================
EDIT:
The choice of which operation to use depends greatly on the use case: if you use "set merge" for a batch update, your request will successfully update all existing documents, but it will also create dummy documents for non-existent ids, which sometimes is not what you want.
After investigating a bit further, we can add another difference:
set merge will always overwrite the fields with the data you pass, while
update is specifically designed to let you perform a partial update of a document without the possibility of creating incomplete documents that your code isn't otherwise prepared to handle. Please check this answer, as well as this scenario.
The difference is that .set(data, {merge:true}) will update the document if it exists, or create the document if it doesn't.
.update() fails if the document doesn't exist.
But why does .update() still exist? Probably for backward compatibility; I believe .set() with merge:true was introduced later than .update(). As you have pointed out, set/merge is more versatile. I use it instead of .update() and instead of .add()
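A toy in-memory model of the two operations makes the difference concrete. This is not the SDK, and it only merges top-level fields (real set-with-merge also deep-merges nested maps), but it captures the create-vs-fail distinction described above:

```typescript
type Doc = Record<string, unknown>;

// Toy document store keyed by document id.
const store = new Map<string, Doc>();

// set(..., { merge: true }): merge into the document, creating it
// if it doesn't exist.
function setMerge(id: string, data: Doc): void {
  store.set(id, { ...(store.get(id) ?? {}), ...data });
}

// update(): merge into the document, but fail if it doesn't exist.
function update(id: string, data: Doc): void {
  const existing = store.get(id);
  if (existing === undefined) throw new Error("not-found");
  store.set(id, { ...existing, ...data });
}
```

So "always use set merge" silently turns "update a document that should exist" into "create a fragment of a document", which is exactly the dummy-document problem the batch example above describes.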

Is it RESTful to DELETE collections?

Some say it's "often not desirable" for a REST server to allow DELETE on an entire collection of entities.
DELETE http://www.example.com/customers
Is this a real rule for achieving RESTful nirvana?
And what about sub-collections, defined by query parameters?
DELETE http://www.example.com/customers?gender=m
The answer to this depends more on the requirements and risks of your application than on the inherent RESTfulness of either construct.
It's "not often desirable" to delete an entire collection if you imagine the collection as something with enduring importance, like a customer list; it doesn't break any essential REST wisdom.
If the collection contains information that a user should be able to delete, and potentially a lot of such information, a DELETE of the entire collection can be the nicest REST-ish way to go, rather than running a lot of individual DELETEs.
Deleting based on criteria (e.g. the query parameter) is so essential to some applications that if the REST police declared it Officially UnRESTful I would continue to do it without shame.
(They actually say "not often desirable," which one might interpret slightly differently than "often not desirable.")
Yes, it's RESTful. If you have a valid use case, it's fine to do it. Your second scenario (deleting with a query) is frequently useful, and can be an easy way to reduce the number of HTTP requests the client has to make.
Edit: as @peeskillet says, do consider whether you actually want to delete something, versus changing a flag on the record (e.g. "active").