What is better: let the database throw an exception or throw a custom exception? - postgresql

I am developing an authentication system using Express, so I have a unique email field in the database.
Should I check the email first and, if it exists, throw a custom error, or let the database throw the error?
I want to know which is better.

Consumers of your API don't and shouldn't know what kind of database you use.
The error that makes it back to them should encapsulate all of it and specifically tell them what is wrong in some standard format with a good HTTP status code.
Database-specific errors leaking to the user should usually be considered a bug.
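
A minimal sketch of that idea, assuming Express (the error class, route, and status codes are illustrative, not from the original answer): catch the low-level database error in the data layer, translate it into an application-level error, and let a single error-handling middleware map it to a standard response.

// app.js - a minimal sketch, assuming Express; all names are illustrative
const express = require('express');
const { createUser } = require('./users'); // data layer; one possible version is sketched under the next answer
const app = express();
app.use(express.json());

app.post('/signup', async (req, res, next) => {
  try {
    await createUser(req.body.email);
    res.status(201).json({ ok: true });
  } catch (err) {
    next(err); // hand everything to the central error handler
  }
});

// Central error handler: one standard format, a good HTTP status code,
// and no database details leaking to the client.
app.use((err, req, res, next) => {
  const status = err.status || 500;
  const message = status === 500 ? 'Internal server error' : err.message;
  res.status(status).json({ error: message });
});

app.listen(3000);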

Both.
You should write code to check whether the email exists before you attempt the insert.
But if that check finds no email, you might still get an error, because of a race condition. For example, in the brief moment between checking for the email and then proceeding to insert the row, some other concurrent session may insert its own row using that email. So your insert will get a duplicate key error in that case, even though you had checked and found the email not present.
Then why bother checking? Because if you use a table with an auto_increment primary key, a failed insert generates and then discards an auto-increment value.
This might seem like a rare and insignificant amount of waste. Also, we don't care that auto-increment id's are consecutive.
But I did help fix an application for a customer where they had a problem that new users were trying 1500 times to create unique accounts before succeeding. So they were "losing" thousands of auto-increment id's for every account. After a couple of months, they exhausted the range of the signed integer.
The fix I recommended was to first check that the email doesn't exist, to avoid attempting the insert if the email is found. But you still have to handle the race condition just in case.
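
A sketch of that combined approach, assuming node-postgres and a users table with a unique index on email (the table, error class, and module layout are illustrative): check first so you don't burn an auto-increment value on an insert that is doomed to fail, but still catch the duplicate-key error (Postgres code 23505) that the race condition can produce.

// users.js - a minimal sketch, assuming node-postgres and a users(id, email) table
const { Pool } = require('pg');
const pool = new Pool(); // connection settings come from environment variables

class DuplicateEmailError extends Error {
  constructor() {
    super('An account with this email already exists');
    this.status = 409; // Conflict; consumed by the error middleware above
  }
}

async function createUser(email) {
  // Cheap pre-check: avoids attempting (and wasting an id on) an insert
  // that is almost certainly going to fail.
  const existing = await pool.query('SELECT 1 FROM users WHERE email = $1', [email]);
  if (existing.rowCount > 0) {
    throw new DuplicateEmailError();
  }
  try {
    await pool.query('INSERT INTO users (email) VALUES ($1)', [email]);
  } catch (err) {
    // The race condition: another session inserted the same email between
    // our check and our insert. 23505 is Postgres's unique_violation code.
    if (err.code === '23505') {
      throw new DuplicateEmailError();
    }
    throw err; // anything else is a genuine failure
  }
}

module.exports = { createUser, DuplicateEmailError };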

Related

Is there any way to check for a duplicate message control id (MSH:10) in the MSH segment using Mirth Connect?

MSH|^~&|sss|xxx|INSTANCE2|KKLIU 0063/2021|20190905162034||ADT^A28^ADT_A05|Zx20190905162034|P|2.4|||NE|NE|||||
Whenever a message enters, it needs to be validated: has a duplicate of control id Zx20190905162034 already been processed?
Mirth will not do this for you, but you can write your own JavaScript transformer to check a database or your own set of previously encountered control ids.
Your JavaScript can make use of any appropriate Java classes.
The database check (you can implement it using a code template) is the easier way out. You might want to designate the column storing MSH:10 values as a primary key, or define an index on it, so that queries against unique entries are faster. Alternatives include reading all MSH:10 values already in the database into a global map variable whenever the channel is redeployed, or maintaining them behind an API that you make a GET request to for every message you process. Which option is best depends on the number of records you are dealing with.
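
A sketch of such a transformer step, assuming Mirth Connect 3.x and a processed_messages table with a unique control_id column (the driver, connection details, and table name are illustrative):

// Mirth source transformer step - a minimal sketch; credentials and table are illustrative
var controlId = msg['MSH']['MSH.10']['MSH.10.1'].toString();

var dbConn = DatabaseConnectionFactory.createDatabaseConnection(
    'org.postgresql.Driver',
    'jdbc:postgresql://localhost:5432/mirthdb',
    'user', 'password');
try {
    var params = new java.util.ArrayList();
    params.add(controlId);
    var result = dbConn.executeCachedQuery(
        'SELECT 1 FROM processed_messages WHERE control_id = ?', params);
    if (result.next()) {
        // Duplicate control id: skip all destinations for this message
        destinationSet.removeAll();
        logger.info('Duplicate control id skipped: ' + controlId);
    } else {
        dbConn.executeUpdate(
            'INSERT INTO processed_messages (control_id) VALUES (?)', params);
    }
} finally {
    dbConn.close();
}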

Allow negative primary keys in SailsJS

Is there any way to allow primary keys to be negative integers in Sails?
I ran across the following error when testing some older software:
{
  "code": "E_INVALID_VALUES_TO_SET",
  "details": "Could not use specified `org`. Expecting an id representing the associated record, or `null` to indicate there will be no associated record. But the specified value is not a valid `org`. Cannot use a negative number (-1) as a primary key value.",
  "message": "The server could not fulfill this request (`PATCH /user/1402`) due to a problem with the parameters that were sent. See the `details` for more info. **The following additional tip will not be shown in production**: Tip: Check your client-side code to make sure that the request data it sends matches the expectations of the corresponding attributes in your model. Also check that your client-side code sends data for every required attribute."
}
I've checked the Sails documentation and can't find any place which mentions that negative primary keys are not allowed.
I've also checked the schema definitions for both tables, and neither specifies the relevant field as unsigned.
Is there any workaround other than changing the relevant row to some different id and updating every other row which references it?
Here is a workaround. Maybe you can change your primary key to a string.
https://sailsjs.com/documentation/concepts/models-and-orm/model-settings
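
A sketch of what that might look like in a model definition, assuming Sails 1.x / Waterline model settings (the model and attribute names are illustrative):

// api/models/User.js - a minimal sketch; names are illustrative
module.exports = {
  attributes: {
    // Override the default auto-incrementing numeric id with a string key,
    // which sidesteps the validation that rejects negative numeric ids.
    id: { type: 'string', required: true },
    // ...the rest of your attributes...
  }
};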

Where to handle database errors in MongoDB?

When a user registers on the website, an e-mail address must be provided, and it must be unique. I've made a unique index on the schema's email attribute, so if I try to save a duplicate document in the database, an error with code 11000 is returned. My question is, with regard to the business layer and the data layer: should I just pass the document to the database and catch/check the error codes it returns, or should I check whether a user with that e-mail exists beforehand? I've been told that data integrity should be checked by the business layer before passing data to the database, but I don't see why I should do that, since I believe Mongo would be much faster raising the exception itself, given that it has the index. The only disadvantage I see in error-code checking is that the error codes might change (though I could abstract them) and the syntax might change.
There is the practical matter of speed, and the fragility of "check-then-set" systems. If you check whether an email exists before you write the document keyed on that email, there is a chance that between the time you check and the time you write, another document claims that email, so your write fails against the unique index anyhow. This is a classic race condition. Further, check-then-set takes two queries, while doing the insert and handling the failure takes only one. In my application I am having success with just letting the failure occur and reacting to the result.
As @JamesWahlin says, it is the difference between doing this all in one operation versus risking mixed results (despite the index check) from potential race conditions by adding the extra client read.
Definitely rely on the response of the insert alone from MongoDB here.
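
A sketch of that insert-and-react approach, assuming Mongoose (the model and error message are illustrative):

// a minimal sketch, assuming Mongoose; the unique option must be backed by a real index
const mongoose = require('mongoose');

const userSchema = new mongoose.Schema({
  email: { type: String, unique: true, required: true },
});
const User = mongoose.model('User', userSchema);

async function register(email) {
  try {
    // Single round trip: attempt the insert and let the unique index decide.
    return await User.create({ email });
  } catch (err) {
    // 11000 is MongoDB's duplicate-key error code; abstract it here so the
    // rest of the application never sees a raw driver error.
    if (err.code === 11000) {
      throw new Error('A user with this e-mail already exists');
    }
    throw err;
  }
}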

Do Salesforce VF email templates require related object to be persisted?

When a new lead comes in, I want to use a before trigger and a Visualforce email template that contains lead field values to send an email using the SingleEmailMessage class. The email is being generated, but all of the lead fields are null even though (known via System.Debug) they do have values going into the call.
Since I'm passing the still-unsaved lead Id via the mail.setWhatId(lead.Id) method, I'm beginning to think that the mail class is using the Id value and trying to do a database look-up rather than as a reference to the still unsaved lead in memory.
Does anyone know if that's the case? My class works flawlessly when the lead already exists.
If it is the case that the Apex mail class does a DB read, are there any pattern suggestions for the case where one needs to send an email and update a lead field value before the lead is saved? I can't use a Workflow email notification because the email is addressed to customers, and there's some additional Apex code that sorts out which address to fetch from existing Account records based on some Lead fields; hence, I think, the need for VF email templates in the first place.
setWhatId (and pretty much any method that takes an ID value as an argument) definitely does expect the row to be persisted already. To get around this, you should be able to just do your field update in the before trigger, and add an after trigger to send the email.

How do I prevent duplicate values in my read database with CQRS

Say that I have a User table in my read database (SQL Server). In a regular read/write database I can put a unique index on the table to make sure that two users aren't added with the same email address.
So if I try to add a user with an email address that already exists in my table for a different user, SQL Server will throw an exception back.
In CQRS I can't do that, because if I decouple the writes to my read database from the domain model by putting them on an asynchronous queue, I won't get the exception thrown back to me; I will return "OK" to the UI, and the user will think that he has been added to the database, when in fact he will never be added to the read database.
I can do a search in the read database to check whether there is already a user with that email address, and if there is one, throw an exception back to the UI. But if two users press the save button at the same time, I will do two checks against the database, see that there isn't any user with that email address yet, and send back that it's okay for both; both go on my queue, and later one of them will fail (by hitting the unique index).
Am I supposed to load all users from my event source (it's a SQL Server) and then do the check on that collection, to see if I have a user that already has this email address? That sounds a bit crazy to me...
How have you people solved it?
The only way I can see is to not use an asynchronous queue but a synchronous one, but that will hurt performance really badly, especially when you have many "read storages" to write to...
Need some help here...
Searching for CQRS Set Based Validation will give you solutions to this issue.
Greg Young posted about the business impact of embracing eventual consistency http://codebetter.com/gregyoung/2010/08/12/eventual-consistency-and-set-validation/
Jérémie Chassaing posted about discovering missing aggregate roots in the domain http://thinkbeforecoding.com/post/2009/10/28/Uniqueness-validation-in-CQRS-Architecture
Related stack overflow questions:
How to handle set based consistency validation in CQRS?
CQRS Validation & uniqueness
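
One approach those links discuss is to make just the uniqueness check synchronous: the command side claims the email address in a small, strongly consistent reservation store before accepting the command, while everything else stays asynchronous. A minimal sketch, shown with node-postgres for brevity (the table, error code, and publishEvent function are illustrative; the same idea works against SQL Server):

// a minimal sketch of the reservation approach; all names are illustrative
const { Pool } = require('pg');
const pool = new Pool();

async function publishEvent(event) {
  // illustrative stub: enqueue the event on your asynchronous message bus
}

async function handleRegisterUser(command) {
  try {
    // Synchronous, strongly consistent claim on the email address.
    // The unique constraint arbitrates between concurrent commands.
    await pool.query(
      'INSERT INTO email_reservations (email) VALUES ($1)',
      [command.email]
    );
  } catch (err) {
    if (err.code === '23505') { // unique_violation: someone else claimed it first
      return { ok: false, reason: 'Email address already taken' };
    }
    throw err;
  }
  // Only after the claim succeeds is the event put on the asynchronous
  // queue; the read database catches up eventually.
  await publishEvent({ type: 'UserRegistered', email: command.email });
  return { ok: true };
}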