A way to identify a record in Firebird without RDB$DB_KEY or a key

Is there a way to identify any record in any table without using RDB$DB_KEY or a table's key?
Unfortunately, RDB$DB_KEY is only guaranteed for the current transaction and might be different outside of it, and without another key in the table one cannot uniquely identify a record that is an exact duplicate of another.

Other than RDB$DB_KEY, a primary key, or a unique key, there is nothing else to uniquely identify a row.
It is possible to extend the lifetime of RDB$DB_KEY to the lifetime of a connection using the DPB property isc_dpb_dbkey_scope. However, using that is a bad idea: it will start an internal transaction for the lifetime of your connection, which will prevent garbage collection of old row versions. This can seriously affect the performance of your application.
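As a minimal sketch, assuming a Firebird driver that exposes a transaction object tx with an async query(sql, params) method (a hypothetical helper, not any specific library's API), using RDB$DB_KEY to remove exactly one of two identical rows looks like this; both statements must run in the same transaction, because that is the only scope in which the key is guaranteed:

async function deleteOneDuplicate(tx) {
  // DUPES is an illustrative table with exact duplicate rows and no key.
  // RDB$DB_KEY values are only guaranteed stable for the transaction
  // that read them, so both statements share the same tx.
  const rows = await tx.query(
    'SELECT RDB$DB_KEY AS DBKEY FROM DUPES WHERE COL1 = ?', ['x']);
  if (rows.length < 2) return; // nothing duplicated, nothing to delete
  await tx.query('DELETE FROM DUPES WHERE RDB$DB_KEY = ?', [rows[0].DBKEY]);
}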

Related

What is better: let the database throw an exception or throw a custom exception?

I am developing an authentication system using Express, so I have a unique email field in the database.
Should I check the email first and, if it exists, throw a custom error, or let the database throw the error?
I want to know which is better.
Consumers of your API don't and shouldn't know what kind of database you use.
The error that makes it back to them should encapsulate all of it and specifically tell them what is wrong in some standard format with a good HTTP status code.
Database-specific errors leaking to the user should usually be considered a bug.
Both.
You should write code to check that the email exists before you attempt the insert.
But if that check finds no email, you might still get an error, because of a race condition. For example, in the brief moment between checking for the email and then proceeding to insert the row, some other concurrent session may insert its own row using that email. So your insert will get a duplicate key error in that case, even though you had checked and found the email not present.
Then why bother checking? Because if you use a table with an auto_increment primary key, a failed insert generates and then discards an auto-increment value.
This might seem like a rare and insignificant amount of waste. Also, we don't care that auto-increment id's are consecutive.
But I did help fix an application for a customer who had a problem where new users were trying 1500 times to create unique accounts before succeeding. So they were "losing" thousands of auto-increment ids for every account, and after a couple of months they exhausted the range of the signed integer.
The fix I recommended was to first check that the email doesn't exist, to avoid attempting the insert if the email is found. But you still have to handle the race condition just in case.
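A minimal sketch of that combined approach, assuming Express with the mysql2 driver and a users table whose email column has a unique index (all names here are illustrative):

const express = require('express');
const mysql = require('mysql2/promise');

const app = express();
app.use(express.json());
const pool = mysql.createPool({ host: 'localhost', user: 'app', database: 'auth' });

app.post('/signup', async (req, res) => {
  const { email } = req.body;
  try {
    // 1. Cheap pre-check: avoids burning an auto-increment id in the
    //    common case where the email is already taken.
    const [existing] = await pool.query('SELECT 1 FROM users WHERE email = ?', [email]);
    if (existing.length > 0) {
      return res.status(409).json({ error: 'email already registered' });
    }
    // 2. The insert can still fail: a concurrent request may win the race
    //    between the check above and this statement.
    await pool.query('INSERT INTO users (email) VALUES (?)', [email]);
    return res.status(201).json({ ok: true });
  } catch (err) {
    if (err.code === 'ER_DUP_ENTRY') {
      // Same outcome as the pre-check; the raw database error never leaks.
      return res.status(409).json({ error: 'email already registered' });
    }
    return res.status(500).json({ error: 'internal error' });
  }
});

app.listen(3000);

Note that both failure paths return the same encapsulated 409 response, so no database-specific error ever reaches the client.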

Is there any way to check for a duplicate message control id (MSH:10) in the MSH segment using Mirth Connect?

MSH|^~&|sss|xxx|INSTANCE2|KKLIU 0063/2021|20190905162034||ADT^A28^ADT_A05|Zx20190905162034|P|2.4|||NE|NE|||||
Whenever a message comes in, it needs to be validated: has a duplicate of control id Zx20190905162034 already been processed?
Mirth will not do this for you, but you can write your own JavaScript transformer to check a database or your own set of previously encountered control ids.
Your JavaScript can make use of any appropriate Java classes.
The database check (you can implement this using a code template) is the easier way out. You might want to designate the column storing MSH:10 values as a primary key, or define an index on it; queries against unique entries are faster. Alternatives include reading all MSH:10 values already in the database into a global map variable whenever the channel is redeployed, or maintaining them behind an API that you make a GET request to when processing every message. Which option is right depends on the number of records we are speaking about; an in-memory sketch follows below.
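For the in-memory variant, here is a sketch of a Mirth JavaScript filter step that tracks control ids in the global map (this set is lost on redeploy unless you reload it from the database as described above; the map key name is illustrative):

// Mirth JavaScript filter step: return true to accept, false to reject.
var controlId = msg['MSH']['MSH.10']['MSH.10.1'].toString();

// Lazily create a shared, thread-safe set of ids seen by this instance
// (the lazy init itself has a small race; acceptable for a sketch).
var seen = globalMap.get('seenControlIds');
if (seen == null) {
    seen = new Packages.java.util.concurrent.ConcurrentHashMap();
    globalMap.put('seenControlIds', seen);
}

// putIfAbsent returns null only for the first message with this id,
// so duplicates are rejected even under concurrent processing.
return seen.putIfAbsent(controlId, true) == null;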

Allow negative primary keys in SailsJS

Is there any way to allow primary keys to be negative integers in Sails?
I ran across the following error when testing some older software:
{
  "code": "E_INVALID_VALUES_TO_SET",
  "details": "Could not use specified `org`. Expecting an id representing the associated record, or `null` to indicate there will be no associated record. But the specified value is not a valid `org`. Cannot use a negative number (-1) as a primary key value.",
  "message": "The server could not fulfill this request (`PATCH /user/1402`) due to a problem with the parameters that were sent. See the `details` for more info. **The following additional tip will not be shown in production**: Tip: Check your client-side code to make sure that the request data it sends matches the expectations of the corresponding attribues in your model. Also check that your client-side code sends data for every required attribute."
}
I've checked the Sails documentation and can't find any place which mentions that negative primary keys are not allowed.
I've also checked the schema definitions for both tables, and neither specifies the relevant field as unsigned.
Is there any workaround other than changing the relevant row to some different id and updating every other row which references it?
Here is a workaround: you could change your primary key to a string.
https://sailsjs.com/documentation/concepts/models-and-orm/model-settings
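As a sketch of that workaround in a Sails v1 model (attribute names are illustrative; the org association matches the error above):

// api/models/User.js
module.exports = {
  attributes: {
    // Override the default auto-incrementing numeric id with a string key,
    // so a value like '-1' is no longer rejected as a primary key.
    id: { type: 'string', required: true },
    org: { model: 'org' }, // the association now carries string ids too
  },
};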

Is it legitimate to insert UUIDs into Postgres that have been generated by a client application?

The normal MO for creating items in a database is to let the database control the generation of the primary key (id). That's usually true whether you're using auto-incremented integer ids or UUIDs.
I'm building a client-side app (Angular, but the tech is irrelevant) that I want to build offline behaviour into. In order to allow offline object creation (and association), I need the client application to generate primary keys for new objects. This is both to allow for associations with other objects created offline and to allow for idempotence (making sure I don't accidentally save the same object to the server twice due to a network issue).
The challenge, though, is what happens when that object gets sent to the server. Do you use a temporary client-side ID which you then replace with the ID that the server subsequently generates, or do you use some sort of ID translation layer between the client and the server? The latter is what Trello did when building their offline functionality.
However, it occurred to me that there may be a third way. I'm using UUIDs for all tables on the back end. And so this made me realise that I could in theory insert a UUID into the back end that was generated on the front end. The whole point of UUIDs is that they're universally unique so the front end doesn't need to know the server state to generate one. In the unlikely event that they do collide then the uniqueness criteria on the server would prevent a duplicate.
Is this a legitimate approach? The risks seem to be 1. collisions and 2. any form of security issue that I haven't anticipated. Collisions seem to be taken care of by the way that UUIDs are generated, but I can't tell if there are risks in allowing a client to choose the ID of an inserted object.
Yes, this is fine. Postgres even has a UUID type.
Set the default ID to be a server-generated UUID if the client does not send one.
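A sketch of that arrangement with node-postgres, assuming a table like items (id uuid PRIMARY KEY DEFAULT gen_random_uuid(), name text); the ON CONFLICT clause also provides the idempotence the question asks about:

const { Pool } = require('pg');
const { randomUUID } = require('crypto'); // generates a UUIDv4, Node 14.17+

const pool = new Pool(); // connection settings from the usual PG* env vars

async function createItem(name, id = randomUUID()) {
  // gen_random_uuid() (Postgres 13+, or pgcrypto) fills in the id when a
  // client omits it. Resending the same client-generated id after a
  // network failure is harmless: the conflicting insert is skipped.
  const { rows } = await pool.query(
    `INSERT INTO items (id, name)
     VALUES ($1, $2)
     ON CONFLICT (id) DO NOTHING
     RETURNING id`,
    [id, name]
  );
  return rows.length > 0 ? rows[0].id : id; // empty result => row already existed
}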
Collisions.
UUIDs are designed to not collide.
Any form of security that I haven't anticipated.
Avoid UUIDv1 because...
This involves the MAC address of the computer and a time stamp. Note that UUIDs of this kind reveal the identity of the computer that created the identifier and the time at which it did so, which might make it unsuitable for certain security-sensitive applications.
You can instead use uuid_generate_v1mc, which obscures the MAC address.
Avoid UUIDv3 because it uses MD5. Use UUIDv5 instead.
UUIDv4 is the simplest: it's a 122-bit random number, and it's built into Postgres (the others are in the commonly available uuid-ossp extension). However, it depends on the strength of each client's random number generator. But even a bad UUIDv4 generator is better than an incrementing integer.
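Client-side, the recommended versions are easy to produce as well; a sketch with the uuid npm package (uuid_generate_v1mc itself is a Postgres server-side function, so it has no client counterpart here):

const { v4: uuidv4, v5: uuidv5 } = require('uuid');

const randomId = uuidv4(); // random 122 bits; the usual default

// Name-based ids: v5 uses SHA-1 rather than v3's MD5. The URL here is an
// illustrative name; uuidv5.URL is the well-known URL namespace constant.
const nameBasedId = uuidv5('https://example.com/item/42', uuidv5.URL);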

Concurrent create in OrientDB

For somewhat vague reasons, we are using replicated OrientDB for our application.
It looks likely that we will have cases where a single record could be created twice. Here is how it happens:
we have user_document entities, each holding the ID of a user and the ID of a document; both users and documents are managed by another application and stored in another database;
when a new document is created, that other application sends a broadcast event (via a RabbitMQ topic);
several instances of our application receive this message, and each creates a user_document with the same pair of user_id and document_id.
If I understand correctly, we should use a UNIQUE index on the pair of these ids and rely upon distributed transactions.
However, for certain reasons (we have another team writing the layer between the application and the database), we probably cannot use UNIQUE, though that may sound stupid :)
What are our chances then?
Could we, for example, allow all instances to create redundant records, and immediately after creation select by user_id and document_id and, if more than one record is found, delete the ones with the lexicographically higher own id?
Sure, you can do it that way.
You can try to use something like
DELETE FROM (SELECT FROM user_document where user_id=? and document_id=? skip 1)
However, note that without an index this approach may consume additional resources on the server, and you might see a significant slowdown if user_document has a large number of records.
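Since another team owns the database layer, the concrete statements can simply be handed over; a sketch of both, with an ORDER BY added so the record kept is the one with the lowest @rid, matching the question's intent (the index name is illustrative):

// Hand these to the data-access layer.
// 1. One-time: a NOTUNIQUE composite index, so the lookup below does not
//    scan the whole user_document class as it grows.
const CREATE_INDEX_SQL =
  'CREATE INDEX user_document_user_doc ' +
  'ON user_document (user_id, document_id) NOTUNIQUE';

// 2. After every insert: keep the record with the lowest @rid for the
//    pair and delete the redundant ones.
const CLEANUP_SQL =
  'DELETE FROM (SELECT FROM user_document ' +
  'WHERE user_id = :uid AND document_id = :did ' +
  'ORDER BY @rid SKIP 1)';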