Thread-safe increment of a value in the DB - PostgreSQL

I have come across a problem and am not sure how to implement it on the database side. I have Go on the application side.
I have a product table with a column named last_port_used. I need to assign ports to services when someone hits an API: each request should increment last_port_used by 1 for the given product name.
One possible solution would have been to use a Redis server and sync this value there, but since we don't have Redis, I want to achieve the same with PostgreSQL.
I read more about locks and I think I need an ACCESS EXCLUSIVE lock. Is this the right way to do it?
product
  id
  name
  start_port     // 11000
  end_port       // 11999
  last_port_used // 11023
How do I handle this properly under concurrent requests?

You could simply do:
UPDATE products
SET last_port_used = last_port_used + 1
WHERE id = ...
  AND last_port_used < end_port
RETURNING *;
This performs the update atomically - a single UPDATE statement row-locks the affected row for its duration - and only if a port number is still available (last_port_used < end_port), returning the row with the newly assigned port. If no row comes back, the range is exhausted.
If you need to hold the lock across several statements, you can also use SELECT ... FOR UPDATE.
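The question mentions Go on the application side; the same single-statement pattern ports directly to Go's database/sql, but here is a minimal sketch in Java/JDBC for concreteness. The class name, the claimPort helper, and the connection details are illustrative, not part of the question:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PortAllocator {

    // Atomically claims the next port for a product, returning -1 when the
    // range is exhausted. Concurrent requests are safe: the UPDATE row-locks
    // the product row for the duration of the statement, so no two callers
    // can observe the same last_port_used value.
    static int claimPort(Connection conn, long productId) throws SQLException {
        String sql = "UPDATE products "
                   + "SET last_port_used = last_port_used + 1 "
                   + "WHERE id = ? AND last_port_used < end_port "
                   + "RETURNING last_port_used";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, productId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getInt(1) : -1; // no row: no port left
            }
        }
    }

    public static void main(String[] args) throws SQLException {
        // Placeholder connection details.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password")) {
            int port = claimPort(conn, 1L);
            System.out.println(port == -1 ? "range exhausted" : "assigned port " + port);
        }
    }
}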

Related

Is there any way to check for a duplicate message control id (MSH:10) in the MSH segment using Mirth Connect?

MSH|^~&|sss|xxx|INSTANCE2|KKLIU 0063/2021|20190905162034||ADT^A28^ADT_A05|Zx20190905162034|P|2.4|||NE|NE|||||
Whenever a message comes in, it needs to be validated whether a duplicate of control id Zx20190905162034 has already been processed.
Mirth will not do this for you, but you can write your own JavaScript transformer to check a database or your own set of previously encountered control ids.
Your JavaScript can make use of any appropriate Java classes.
The database check (you can implement it using a code template) is the easier way out. You might want to designate the column storing MSH:10 values as a primary key, or define a unique index on it, so that lookups against it stay fast. Alternatives include reading all MSH:10 values already in the database when the channel is (re)deployed and placing them in a global map variable, or maintaining them behind an API that you issue a GET request to for every message processed. Which option fits best depends on the number of records involved.
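Since the transformer's JavaScript can call Java classes directly, here is a hedged sketch of the database check in plain Java, assuming a hypothetical processed_messages table with a UNIQUE control_id column (neither the class nor the table is part of Mirth itself):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class DuplicateChecker {

    // Returns true if this control id was seen before. The UNIQUE constraint
    // on processed_messages.control_id does the real work: the INSERT either
    // succeeds (first occurrence) or fails with a duplicate-key error.
    static boolean isDuplicate(Connection conn, String controlId) throws SQLException {
        String sql = "INSERT INTO processed_messages (control_id) VALUES (?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, controlId);
            ps.executeUpdate();
            return false;                          // inserted: first time seen
        } catch (SQLException e) {
            if ("23505".equals(e.getSQLState())) { // unique_violation (PostgreSQL)
                return true;                       // already processed
            }
            throw e;                               // unrelated failure: propagate
        }
    }
}

In the transformer you would read MSH:10 from the parsed message and filter or route the message based on the returned flag.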

Why doesn't Spring Data support returning an entity for modifying queries?

When implementing a system which creates tasks that need to be resolved by some workers, my idea would be to create a table with the task definition along with a status, e.g. for document review we'd have something like reviewId, documentId, reviewerId, reviewTime.
When documents are uploaded to the system we'd just store the documentId along with a generated reviewId and leave reviewerId and reviewTime empty. When the next reviewer comes along and starts the review we'd set his id and the current time to mark the job as "in progress" (I deliberately skip the case where the reviewer takes a long time, or dies during the review).
When implementing such a use case in e.g. PostgreSQL we could use UPDATE review SET reviewerId = :reviewerId, reviewTime = :reviewTime WHERE reviewId = (SELECT reviewId FROM review WHERE reviewerId IS NULL AND reviewTime IS NULL LIMIT 1 FOR UPDATE SKIP LOCKED) RETURNING reviewId, documentId, reviewerId, reviewTime (so basically: update the first non-taken row, using SKIP LOCKED to skip any rows already in progress).
But when moving from the native solution to JDBC and beyond, I'm having trouble implementing this:
Spring Data JPA and Spring Data JDBC don't allow a @Modifying query to return anything other than void/boolean/int, forcing us to perform 2 queries in a single transaction - one for the first pending row, and a second one with the update;
one alternative would be to use a stored procedure, but I really hate the idea of storing such logic so far away from the code;
another alternative would be to use a persistent queue and skip the database altogether, but this introduces additional infrastructure components that need to be maintained and learned. Any suggestions are welcome though.
Am I missing something? Is it possible to have it all, or do we have to settle for multiple queries or stored procedures?
Why doesn't Spring Data support returning an entity for modifying queries?
Because it seems like a rather special thing to do and Spring Data JDBC tries to focus on the essential stuff.
Is it possible to have it all or do we have to settle for multiple queries or stored procedures?
It is certainly possible to do this.
You can implement a custom method using an injected JdbcTemplate.
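A minimal sketch of such a custom repository fragment, assuming a recent Spring version, the review table from the question with snake_case columns, and an illustrative Review record (none of the names below come from Spring Data itself):

import java.sql.Timestamp;
import java.time.Instant;
import java.util.Optional;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.ResultSetExtractor;
import org.springframework.stereotype.Repository;

// Custom fragment interface; Spring Data merges it into the main repository.
interface ReviewClaimRepository {
    Optional<Review> claimNextPending(long reviewerId);
}

@Repository
class ReviewClaimRepositoryImpl implements ReviewClaimRepository {

    private final JdbcTemplate jdbc;

    ReviewClaimRepositoryImpl(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    @Override
    public Optional<Review> claimNextPending(long reviewerId) {
        // One round trip: claim a single unassigned row, skipping rows that
        // other transactions have already locked, and return the claimed row.
        String sql =
              "UPDATE review SET reviewer_id = ?, review_time = ? "
            + "WHERE review_id = (SELECT review_id FROM review "
            + "  WHERE reviewer_id IS NULL LIMIT 1 FOR UPDATE SKIP LOCKED) "
            + "RETURNING review_id, document_id, reviewer_id, review_time";
        ResultSetExtractor<Optional<Review>> one = rs -> rs.next()
            ? Optional.of(new Review(rs.getLong(1), rs.getLong(2),
                                     rs.getLong(3), rs.getTimestamp(4).toInstant()))
            : Optional.empty();
        return jdbc.query(sql, one, reviewerId, Timestamp.from(Instant.now()));
    }
}

// Minimal data carrier to keep the sketch self-contained.
record Review(long reviewId, long documentId, long reviewerId, Instant reviewTime) {}

Whether RETURNING is available depends on the database; on PostgreSQL the whole claim is a single statement, which is exactly what the @Modifying restriction otherwise forces you to split in two.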

Concurrent create in OrientDB

For somewhat vague reasons we are using replicated OrientDB for our application.
It looks likely that we will have cases where a single record could be created twice. Here is how it happens:
we have user_document entities, each holding the ID of a user and the ID of a document - both users and documents are managed by another application and stored in another database;
on creating a new document, this other application sends a broadcast event (via a RabbitMQ topic);
several instances of our application receive this message, and each creates another user_document with the same pair of user_id and document_id.
If I understand correctly, we should use a UNIQUE index on the pair of these IDs and rely upon distributed transactions.
However, due to some reasons (we have another team writing the layer between the application and the database) we probably could not use UNIQUE, though it may sound stupid :)
What are our chances then?
Could we, for example, allow all instances to create redundant records and, immediately after creation, select by user_id and document_id and, if more than one record is found, delete the ones with the lexicographically higher own id?
Sure, you can do it that way.
You can try something like
DELETE FROM (SELECT FROM user_document WHERE user_id = ? AND document_id = ? SKIP 1)
However, note that without an index this approach may consume additional resources on the server, and you might see a significant slowdown if user_document holds a large number of records.
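A hedged sketch of that create-then-deduplicate flow, assuming the OrientDB 3.x Java API; the connection details are placeholders, and the ORDER BY @rid is an addition so that "keep the record with the lowest id" is deterministic:

import com.orientechnologies.orient.core.db.ODatabaseSession;
import com.orientechnologies.orient.core.db.OrientDB;
import com.orientechnologies.orient.core.db.OrientDBConfig;
import com.orientechnologies.orient.core.record.OElement;

public class UserDocumentWriter {

    static void createAndDeduplicate(ODatabaseSession db,
                                     String userId, String documentId) {
        // Every instance creates its record unconditionally...
        OElement doc = db.newElement("user_document");
        doc.setProperty("user_id", userId);
        doc.setProperty("document_id", documentId);
        doc.save();

        // ...then deletes everything except the record with the lowest @rid
        // for this (user_id, document_id) pair.
        db.command("DELETE FROM (SELECT FROM user_document "
                 + "WHERE user_id = ? AND document_id = ? "
                 + "ORDER BY @rid SKIP 1)", userId, documentId).close();
    }

    public static void main(String[] args) {
        try (OrientDB orient = new OrientDB("remote:localhost", OrientDBConfig.defaultConfig());
             ODatabaseSession db = orient.open("mydb", "admin", "admin")) {
            createAndDeduplicate(db, "user-1", "doc-1");
        }
    }
}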

Strategy to generate auto increment unique number in CQ

I have a requirement to create CQ pages programmatically. The challenge is that the page name/URI should be an autogenerated combination of a string + a unique number (e.g. PT2000, PT2001).
Can someone tell me a way to generate an auto-increment id/constant in CQ such that the ids are unique even across multiple concurrent requests?
Use a service that provides you with the ID and manages the counter inside an AtomicLong instance variable (a volatile variable alone would make state changes by one thread visible to all other threads, but would not make the increment itself atomic).
This should do the trick as long as you can guarantee that your implementation runs on a single author node. In a cluster scenario you additionally have to take care of executing it on only one node.
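A minimal sketch of such a service; the interface name and starting value are illustrative, and OSGi Declarative Services annotations are assumed to be available:

import java.util.concurrent.atomic.AtomicLong;

import org.osgi.service.component.annotations.Component;

// Hypothetical service interface; not part of the CQ/AEM API.
interface PageIdGenerator {
    String nextId();
}

@Component(service = PageIdGenerator.class)
public class PageIdGeneratorImpl implements PageIdGenerator {

    // Starting value chosen to match the PT2000 example from the question.
    private final AtomicLong counter = new AtomicLong(2000);

    @Override
    public String nextId() {
        // getAndIncrement is atomic: two concurrent requests can never
        // receive the same number.
        return "PT" + counter.getAndIncrement();
    }
}

Note that an in-memory counter resets on restart; persisting it in the repository, as the next answer suggests, avoids that.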
I'd suggest creating a service that manages its counters somewhere in the repository and also acts as a JCR EventListener. The service should listen for NODE_ADDED events on parent nodes of type cq:Page, and once onEvent is called, it can assign the unique id at that point. You'd want to use synchronization, obviously, so that overlapping calls to onEvent() won't use up the same id.
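A hedged sketch of that listener using the JCR observation API; the class name and the /content path are illustrative, and the reading/persisting of the counter in the repository is elided:

import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.observation.Event;
import javax.jcr.observation.EventIterator;
import javax.jcr.observation.EventListener;
import javax.jcr.observation.ObservationManager;

class PageIdAssigningListener implements EventListener {

    private long counter; // would be loaded from, and flushed to, the repository

    PageIdAssigningListener(long initialValue) {
        this.counter = initialValue;
    }

    static void register(Session session, PageIdAssigningListener listener)
            throws RepositoryException {
        ObservationManager om = session.getWorkspace().getObservationManager();
        // NODE_ADDED events below /content, restricted to cq:Page parent nodes.
        om.addEventListener(listener, Event.NODE_ADDED, "/content",
                true, null, new String[] {"cq:Page"}, false);
    }

    @Override
    public synchronized void onEvent(EventIterator events) {
        // synchronized: overlapping onEvent calls cannot hand out the same id.
        while (events.hasNext()) {
            Event event = events.nextEvent();
            long id = counter++;
            // ... use event.getPath() and "PT" + id to label the new page,
            // then persist the updated counter back to the repository.
        }
    }
}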
You can use a GUID (Globally Unique Identifier); an ID generated this way has an extremely high probability of uniqueness.
See the wiki reference http://en.wikipedia.org/wiki/Globally_unique_identifier
and, to create a GUID:
Create a GUID in Java
This saves you the effort of verifying that the number is unique: just generate the ID and create the pages with that ID.
Doesn't AEM automatically append numbers to pages with the same name?
If it doesn't, then presumably the page creation would fail, at which point you start over with the next number. A best guess should be enough in this case.

How to get a list of aggregates using JOliver's CommonDomain and EventStore?

The repository in the CommonDomain only exposes GetById(). So what do I do if my handler needs a list of Customers, for example?
Taking your question at face value: if you needed to perform operations on multiple aggregates, you would just provide the IDs of each aggregate in your command (which the client would obtain from the query side), then get each aggregate from the repository.
However, looking at one of your comments in response to another answer, I see what you are actually referring to is set-based validation.
This very question has raised quite a lot of debate about how to do this, and Greg Young has written a blog post on it.
The classic question is 'how do I check that the username hasn't already been used when processing my CreateUserCommand?'. I believe the suggested approach is to assume that the client has already done this check by asking the query side before issuing the command. When the user aggregate is created, the UserCreatedEvent will be raised and handled by the query side. There, the insert query will fail (either because of a check or a unique constraint in the DB), and a compensating command would be issued, which would delete the newly created aggregate and perhaps email the user telling them the username is already taken.
The main point is, you assume that the client has done the check. I know this approach is difficult to grasp at first - but it's the nature of eventual consistency.
Also you might want to read this other question, which is similar and contains some wise words from Udi Dahan.
In the classic event sourcing model, queries like 'get all customers' would be carried out by a separate query handler which listens to all events in the domain and builds a query model to satisfy the relevant questions.
If you need to query customers by last name, for instance, you could listen to all customer-created and customer-name-changed events and just update one table of last-name to customer-id pairs. You could hold other information relevant to the UI that is showing the data, or you could simply hold IDs and go to the repository for the relevant customers in order to work further with them.
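A minimal sketch of such a projection; the CommonDomain/EventStore stack is .NET, but this is sketched in Java for consistency with the other examples here, and the event types and in-memory storage are illustrative:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical domain events; the real ones would come from your aggregates.
record CustomerCreated(UUID customerId, String lastName) {}
record CustomerNameChanged(UUID customerId, String newLastName) {}

// Read-model projection: maintains a customer-id -> last-name lookup that
// the query side can answer from without touching the aggregates.
class CustomerByLastNameProjection {

    private final Map<UUID, String> lastNameById = new ConcurrentHashMap<>();

    void on(CustomerCreated e) {
        lastNameById.put(e.customerId(), e.lastName());
    }

    void on(CustomerNameChanged e) {
        lastNameById.put(e.customerId(), e.newLastName());
    }

    // Query side: all customer ids with the given last name.
    List<UUID> customersWithLastName(String lastName) {
        List<UUID> ids = new ArrayList<>();
        lastNameById.forEach((id, name) -> {
            if (name.equals(lastName)) {
                ids.add(id);
            }
        });
        return ids;
    }
}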
You don't need a list of customers in your handler. Each aggregate MUST be processed in its own transaction. If you want to show this list to the user, just build an appropriate view.
Your command needs to contain the id of the aggregate root it should operate on.
The client sending the command looks this id up using a view in your read model. That view is populated with data from the events that your aggregate root (AR) emits.