I have a requirement to create CQ pages programmatically. The challenge is that the page name/URI should be an autogenerated combination of a string and a unique number (e.g. PT2000, PT2001).
Can someone tell me a way to generate an auto-increment id in CQ such that the ids are unique even with multiple concurrent requests?
Use a service that provides you with the ID and that manages the counter internally. Note that a volatile instance variable only makes state changes by one thread immediately visible to all other threads; it does not make the read-increment-write itself atomic, so guard the counter with synchronization or an AtomicLong.
This should do the trick as long as you can guarantee that your implementation runs on a single author node. In a cluster scenario you additionally have to take care that it executes on only one node.
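A minimal sketch of such a service (class and method names are illustrative; persisting the counter across restarts is left out):

import java.util.concurrent.atomic.AtomicLong;

public class PageIdService {
    // AtomicLong makes the increment itself atomic across threads,
    // which a plain volatile field would not.
    private final AtomicLong counter = new AtomicLong(2000);

    // Returns "PT2000", "PT2001", ... even under concurrent requests.
    public String nextPageName() {
        return "PT" + counter.getAndIncrement();
    }
}

On restart the counter starts over, so a real implementation would persist the last issued value (e.g. in the repository) and seed the AtomicLong from it.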
I'd suggest creating a service that manages its counters somewhere in the repository and also acts as a JCR EventListener. The service should listen for NODE_ADDED events on parent nodes of type cq:Page, and once onEvent is called, it can assign the unique id at that point. You'd obviously want to use synchronization so that overlapping calls to onEvent() won't use up the same id.
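A rough sketch of the listener registration and the synchronized id handout, assuming you have a Session and a content root such as /content/mysite (both illustrative; persisting the counter is again omitted):

import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.observation.Event;
import javax.jcr.observation.EventIterator;
import javax.jcr.observation.EventListener;
import javax.jcr.observation.ObservationManager;

public class PageIdAssigner implements EventListener {
    private long nextId = 2000; // in practice, load/persist this in the repository

    public void register(Session session) throws RepositoryException {
        ObservationManager om = session.getWorkspace().getObservationManager();
        // Listen for nodes added below /content/mysite, filtered to cq:Page.
        om.addEventListener(this, Event.NODE_ADDED, "/content/mysite",
                true, null, new String[]{"cq:Page"}, false);
    }

    @Override
    public synchronized void onEvent(EventIterator events) {
        while (events.hasNext()) {
            try {
                String path = events.nextEvent().getPath();
                long id = nextId++; // synchronized, so no two events share an id
                // ... assign "PT" + id to the page at 'path'
            } catch (RepositoryException e) {
                // log and continue with the next event
            }
        }
    }
}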
You can use a GUID (Globally Unique Identifier); an ID generated this way has an extremely high probability of uniqueness.
See wiki reference http://en.wikipedia.org/wiki/Globally_unique_identifier
To create a GUID in Java, see: Create a GUID in Java
This spares you the effort of verifying that the number is unique: just generate the ID and create the pages with it.
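In Java this is a one-liner:

String pageId = java.util.UUID.randomUUID().toString();
// e.g. "3f29c3de-5c8f-4d3a-9b2a-7f6d0e1c4b5a" — collisions are practically impossible

Note that a UUID won't give you the short, human-readable PT2000-style names from the question; it trades readability for guaranteed uniqueness.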
Doesn't AEM automatically append numbers to pages with the same name?
If it doesn't, then presumably the creation would fail, at which point you start over with the next number. A best guess should be enough in this case.
Is there any way to check for a duplicate message control id (MSH:10) in the MSH segment using Mirth Connect?
MSH|^~&|sss|xxx|INSTANCE2|KKLIU 0063/2021|20190905162034||ADT^A28^ADT_A05|Zx20190905162034|P|2.4|||NE|NE|||||
Whenever a message comes in, it needs to be validated: has a message with the control id Zx20190905162034 already been processed?
Mirth will not do this for you, but you can write your own JavaScript transformer to check a database or your own set of previously encountered control ids.
Your JavaScript can make use of any appropriate Java classes.
The database check (which you can implement using a code template) is the easier way out. You might want to designate the column storing MSH:10 values as a primary key, or define an index on it; queries against unique entries are faster. Alternatives include reading all MSH:10 values already in the database into a global map variable whenever the channel is redeployed, or maintaining them behind an API that you call with a GET request for every message. Which option is best depends on the number of records we are talking about.
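A sketch of the unique-column idea, shown here in Java (in Mirth you would put the equivalent logic in a JavaScript transformer, which can call Java classes directly); the table name is made up:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ControlIdChecker {
    // Assumes: CREATE TABLE processed_messages (control_id VARCHAR(64) PRIMARY KEY)
    // Returns true if the control id is new, false if it is a duplicate.
    public static boolean markProcessed(Connection conn, String controlId) {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO processed_messages (control_id) VALUES (?)")) {
            ps.setString(1, controlId);
            ps.executeUpdate();
            return true;
        } catch (SQLException e) {
            // A primary-key violation means this control id was seen before.
            // Real code should inspect e.getSQLState() instead of catching broadly.
            return false;
        }
    }
}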
This question seemed foolish to me at first sight, but then I realized that I don't have a proper answer yet, and interestingly, I also didn't find a good explanation for it in my searches.
I'm new to Domain Driven Design concepts, so, even if the question is basic, feel free to add any considerations to it.
I'm designing a REST API to configure server instances, and I came up with an Aggregate called Instance that contains a List of Configurations; only one specific Configuration will be active at a given time.
To add a Configuration, one would call the endpoint POST /instances/{id}/configurations with the desired configuration in the body. In response, if all goes well, the caller would receive an HTTP 204 with a Location header containing the new Configuration ID.
I'm planning to have only one controller, InstanceController, which would call an InstanceService that manipulates the Instance Aggregate and then stores it to the Repo.
Since the IDs are generated by the repository, if I call Instance.addConfiguration and then InstanceRepository.store, how would I get the ID of the newly created Configuration? I mean, it's a List, so it's not as trivial as calling Instance.configuration.identity.
One option would be to implement a method on Instance like getLastAddedConfiguration, but this seems really brittle.
What is the general approach in this situation?
the IDs are generated by the repository
You could remove this extra complexity. Since Configuration is an entity of the Instance aggregate, its id only needs to be unique inside the aggregate, not across the whole application. Therefore, the easiest approach is for the aggregate to assign the ConfigurationId itself in the Instance.addConfiguration method (the aggregate can easily ensure the uniqueness of the new id). This method can return the new ConfigurationId (or the whole object carrying the id, if necessary).
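A minimal sketch of that idea (all names illustrative):

import java.util.ArrayList;
import java.util.List;

public class Instance {
    public static final class Configuration {
        final int id;
        final String body;
        Configuration(int id, String body) { this.id = id; this.body = body; }
    }

    private final List<Configuration> configurations = new ArrayList<>();
    private int nextConfigurationId = 1; // only needs to be unique within this aggregate

    // The aggregate hands out the id itself, so no repository round trip is
    // needed and the caller gets the id back for the Location header.
    public int addConfiguration(String body) {
        int id = nextConfigurationId++;
        configurations.add(new Configuration(id, body));
        return id;
    }
}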
What is the general approach in this situation?
I'm not sure about the general approach, but in my opinion, the sooner you create the ids the better. For aggregates, you'd create the id before storing them (maybe a GUID); for entities, the aggregate can create it at the moment of creating/adding the entity. This allows you to perform other actions (e.g. publishing an event) using these ids without having to store and then retrieve them from the DB, which would necessarily affect how you implement and use your repositories, and that is not ideal.
I have two bounded contexts which lead to two microservices:
PersonalManagement
DocumentStorage
I keep the entity model simple here.
PersonalManagement:
Entity/Table Person:
#id - int
tenantId - int
name - string
...
DocumentStorage:
Entity/Table Document:
#id - int
tenantId - int
personId - int
dateIssued - string
...
You need to know that before the application is started, a company (tenant) is chosen to define the company context.
I want to store a new document by using REST/JSON.
This is a POST to /tenants/1/persons/5/documents
with the body
{
"dateIssued" : "2018-06-11"
}
On the backend side, I validate the input body.
One validation might be "does the specified person exist and really belong to the given tenant?".
Since this info is stored in the PersonalManagement microservice, I need to provide an operation like
"does (personId=5, tenantId=1) exist?"
in PersonalManagement to ensure consistency, since the caller might be malicious.
Or in general:
What is the best practice for checking "ownership" of entities across databases in microservices?
It might also be an option that when a new person is created, (tenantId, personId) is stored additionally(!) in DocumentStorage, but I want to avoid this redundancy.
I'm not going to extend this answer into whether your bounded contexts and service endpoints are well defined, since your question seems to simplify the issue deliberately to keep a well defined scope. But regarding your specific question:
What is the best practice for checking "ownership" of entities across databases in microservices?
Microservice architectures strive for a "share nothing" principle, and that usually extends from the codebase to the database. So you're right to assume you're checking this constraint "cross-DB" in your scenario.
You have a few options on this particular case, each with their set of drawbacks:
1) Your proposed "does (personId=5, tenantId=1) exist?" call from the DocumentContext to the PersonContext is not wrong in itself, but it creates a direct dependency between the two microservices, so you must ask yourself whether it is acceptable to reject new documents whenever the PersonalManagement microservice is offline.
In specific situations such dependencies might be acceptable, but the more of them you have, the less your microservice architecture behaves as one, and the more it resembles a "distributed monolith", which is in itself pretty much an anti-pattern.
2) The other main option is to recognize that the DocumentContext is very much interested in some information/behavior relating to people, so it is fine to model a Person entity inside its own boundaries.
That means you can have the DocumentContext subscribe to changes in the PersonContext, so it stays aware of which people currently exist and what their relevant characteristics are, and can keep a local copy of that information.
That way, your validation is kept entirely inside the DocumentContext, whose operation is unhindered by eventual issues with the PersonContext, and you will find that your modelling of the document-related entities becomes much cleaner than before.
But in the end, you will also discover that a "share nothing" principle usually costs you what looks like redundancy, but is actually independence of contexts.
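A minimal sketch of option 2, assuming some event mechanism delivers PersonCreated/PersonDeleted notifications from PersonalManagement (all names illustrative):

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// DocumentStorage's local, read-only projection of the People it cares about,
// kept up to date from events published by PersonalManagement.
public class PersonProjection {
    private final Set<String> knownPersons = ConcurrentHashMap.newKeySet();

    public void onPersonCreated(int tenantId, int personId) {
        knownPersons.add(tenantId + ":" + personId);
    }

    public void onPersonDeleted(int tenantId, int personId) {
        knownPersons.remove(tenantId + ":" + personId);
    }

    // Ownership check answered locally — no call to PersonalManagement at
    // request time, so document creation still works if that service is down.
    public boolean belongsToTenant(int tenantId, int personId) {
        return knownPersons.contains(tenantId + ":" + personId);
    }
}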
Just for the tenancy check: this can be done using a JWT token (a token that can carry tenancy information and other metadata).
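For illustration, this is roughly where such a claim lives inside the token (the claim name is an assumption; a real service must verify the signature with a proper JWT library rather than just decoding the payload):

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class TenantClaim {
    // Decodes the payload part of a JWT. NOTE: no signature verification here —
    // this only shows that the tenant id can travel inside the token.
    public static String payloadJson(String jwt) {
        String[] parts = jwt.split("\\.");  // header.payload.signature
        return new String(Base64.getUrlDecoder().decode(parts[1]),
                StandardCharsets.UTF_8);
        // e.g. {"sub":"5","tenantId":"1",...} — compare tenantId to the URL's tenant
    }
}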
Let me provide another example of the same scenario which can't be solved with a JWT.
Assume a customer wants to create an order, and our system wants to check whether that customer exists while creating the order.
As the Order and Customer services are separate, and we want minimal dependencies between them, there are multiple solutions to this problem:
1) Create the order in a "validating" state, and on the OrderCreated event check the customer's validity and update the order's state to "valid" (see the sketch after this list).
2) Check for the customer before creating the order (this is not the right way, as it creates a dependency; unless it is very critical, do not do it).
3) Let the order be created, and have whoever does the final check of the order for delivery verify the customer and remove invalid orders.
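A minimal sketch of the first option, an order that is accepted immediately and validated asynchronously (names and states are illustrative):

public class Order {
    public enum State { VALIDATING, VALID, REJECTED }

    private final String customerId;
    private State state = State.VALIDATING; // accepted now, validated later

    public Order(String customerId) {
        this.customerId = customerId;
    }

    // Called by the handler of the OrderCreated event once the customer's
    // existence has been checked (against the Customer service or a local projection).
    public void onCustomerChecked(boolean customerExists) {
        this.state = customerExists ? State.VALID : State.REJECTED;
    }

    public State getState() { return state; }
}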
This is the first time I've thought about it...
Until now, I have always used the natural key in my API. For example, in a REST API for dealing with entities, the URL would be like /entities/{id}, where id is a natural key known to the user (the id is passed to the POST request that creates the entity). After the entity is created, the user can use several methods (GET, DELETE, PUT...) to manipulate it. The entity also has a surrogate key generated by the database.
Now, think about the following sequence:
A user creates entity with id 1. (POST /entities with body containing id 1)
Another user deletes the entity (DELETE /entities/1)
The same other user creates the entity again (POST /entities with body containing id 1)
The first user decides to modify the entity (PUT /entities/1 with body)
Before step 4 is executed, there is still an entity with id 1 in the database, but it is not the same entity that was created in step 1. The problem is that step 4 identifies the entity to modify by the natural key, which is the same for the deleted and the new entity (while the surrogate keys differ). Therefore, step 4 will succeed and the user will never know they are working on a new entity.
I generally also use optimistic locking in my applications, but I don't think it helps here. After step 1, the entity's version field is 0. After step 3, the new entity's version field is also 0, so the version check won't help. Is this the right case for using a timestamp field for optimistic locking?
Is the "good" solution to return surrogate key to the user? This way, the user always provides the surrogate key to the server which can use it to ensure it works on the same entity and not on a new one?
Which approach do you recommend?
It depends on how you want your users to use your API.
REST APIs should try to be discoverable. So if there is a benefit in exposing natural keys in your API, because it allows users to modify the URI directly and get to a new state, then do it.
A good example is categories or tags. We could have the following URIs:
GET /some-resource?tag=1 // returns all resources tagged with 'blue'
GET /some-resource?tag=2 // returns all resources tagged with 'red'
or
GET /some-resource?tag=blue // returns all resources tagged with 'blue'
GET /some-resource?tag=red // returns all resources tagged with 'red'
There is clearly more value to a user in the second group, as they can see that the tag is a real word. That allows them to type ANY word in there to see what's returned, whereas the first group does not allow this: it limits discoverability.
A different example would be orders
GET /orders/1 // returns order 1
or
GET /orders/some-verbose-name-that-adds-no-meaning // returns order 1
In this case there is little value in adding a verbose name to the order to make it discoverable. A user is more likely to want to view all orders first (or a subset), filter by date or price etc., and then choose an order to view:
GET /orders?orderBy={date}&order=asc
Additional
After our discussion over chat, your issue seems to be with versioning and how to manage resource locking.
If you allow resources to be modified by multiple users, you need to send a version number with every request and response. The version number is incremented when any changes are made. If a request sends an older version number when trying to modify a resource, throw an error.
In the case where you allow the same URIs to be reused, there is a potential for conflict, as the version number always begins at 0. In this case, you will also need to send a GUID (surrogate key) along with the version number. Or don't use natural URIs (see the original answer above to decide when to do this or not).
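A sketch of that combined check (exception types and status-code mapping are illustrative):

public class WriteGuard {
    // The client must prove it saw the current incarnation of the entity:
    // same surrogate key (GUID) AND same version number.
    public static void checkWriteAllowed(String storedGuid, long storedVersion,
                                         String requestGuid, long requestVersion) {
        if (!storedGuid.equals(requestGuid)) {
            // The URI was reused: same natural key, but a different entity.
            throw new IllegalStateException("entity was recreated"); // map to e.g. 409
        }
        if (storedVersion != requestVersion) {
            throw new IllegalStateException("stale version"); // map to e.g. 409/412
        }
    }
}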
There is another option which is to disallow reuse of URIs. This really depends on the use case and your business requirements. It may be fine to reuse a URI as conceptually it means the same thing. Example would be if you had a folder on your computer. Deleting the folder and recreating it, is the same as emptying the folder. Conceptually the folder is the same 'thing' but with different properties.
User accounts are probably an area where reusing URIs is not a good idea. If you delete the account /accounts/u1, that URI should be marked as deleted, and no other user should be able to create an account with username u1. Conceptually, a new user using the same URI is not the same as the previous user using it.
It's interesting to see people trying to rediscover solutions to known problems. This issue is not specific to a REST API; it applies to any indexed storage. The only solution I have ever seen implemented is: don't re-use surrogate keys.
If you are generating your surrogate key at the client, use UUIDs or split sequences, but preferably do it server-side.
Also, you should never use surrogate keys to de-reference data if a simple natural key exists in the data. Indeed, even if the natural key is a compound entity, you should consider very carefully whether to expose a surrogate key in the API.
You mentioned the possibility of using a timestamp for your optimistic locking.
Depending on how strictly you're following RESTful principles, the Entity returned by the POST will contain an "edit self" link; this is the URI to which a DELETE or PUT can be performed.
Taking your steps above as an example:
Step 1
User A does a POST of Entity 1. The returned Entity object will contain a "self" link indicating where updates should occur, like:
/entities/1/timestamp/312547124138
Step 2
User B gets the existing Entity 1, with the above "self" link, and performs a DELETE to that timestamp versioned URI.
Step 3
User B does a POST of a new Entity 1, which returns an object with a different "self" link, e.g.:
/entities/1/timestamp/312547999999
Step 4
User A, with the original Entity that they obtained in Step 1, tries doing a PUT to the "self" link on their object, which was:
/entities/1/timestamp/312547124138
...your service will recognise that although Entity 1 does exist, User A is trying a PUT against a version which has since become stale.
The service can then perform the appropriate action. Depending how sophisticated your algorithm is, you could either merge the changes or reject the PUT.
I can't remember offhand the appropriate HTTP status code to return following a PUT to a stale version... it's not something I've implemented in the REST framework that I work on, although I plan to enable it in the future. 409 ("Conflict") or 412 ("Precondition Failed") are the usual candidates, and 410 ("Gone") could fit if the stale version is treated as no longer existing.
Step 5
I know you don't have a step 5, but..! User A, upon finding their PUT has failed, might re-retrieve Entity 1. This could be a GET to their (stale) version, i.e. a GET to:
/entities/1/timestamp/312547124138
...and your service would return a redirect to GET from either a generic URI for that object, e.g.:
/entities/1
...or to the specific latest version, i.e.:
/entities/1/timestamp/312547999999
They can then make the changes intended in Step 4, subject to any application-level merge logic.
Hope that helps.
Your problem can be solved either by using ETags for versioning (a record can only be modified if the current ETag is supplied) or by soft deletes (the deleted record still exists, but with a "trashed" flag that is reset by a PUT).
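A minimal sketch of the ETag check, assuming the ETag is simply the record's version number and arrives in the If-Match header:

public class ETagCheck {
    // Returns true if the update may proceed; on false, respond with
    // 412 Precondition Failed so the client knows its copy is stale.
    public static boolean ifMatchOk(String ifMatchHeader, long currentVersion) {
        String currentETag = "\"" + currentVersion + "\"";
        return currentETag.equals(ifMatchHeader);
    }
}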
It sounds like you might also benefit from a batch endpoint and from using transactions.
I'm trying to write a MERGE statement that gets the node by its node id; the problem is that MERGE doesn't allow me to use "where id(node)=nodeId".
Something like:
merge( user:User) where id(user)=111
on create set user =
{facebookId:"13",name:"",gender:"",pushId:"", picoAccessToken:"",
accessTokenExpires:""}
on match set user +=
{facebookId:"13",name:"",gender:"",pushId:"",picoAccessToken:"",
accessTokenExpires:""}
return user
A general question: should I use the node id to retrieve nodes, or should I add an id property to the node?
You can't use the "ID()" function with MERGE because it behaves as "MATCH or CREATE" and you can't assign internal ids manually to nodes.
On the other hand: yes, it is better to generate ids at the application level and assign them to nodes yourself, or to use the GraphAware UUID plugin that will do it for you: https://github.com/graphaware/neo4j-uuid/tree/master
Relying on nodes' internal ids is considered bad practice, as the ids of deleted nodes are reused over the database's lifecycle.
Michael Hunger from Neo Technology has already answered the question regarding internal ids here on StackOverflow.
The bottom line is that node ids should not be used; rather, they should be considered an implementation detail.
So, the recommended approach is to use your own id property. This can be a UUID, some kind of counter, or whatever id is suitable for your project.
Personally I like the UUID approach. UUIDs are easy to generate from any language (e.g. the UUID class in Java), they are unique, and it is obvious that they are generated.
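A minimal sketch of the property-based MERGE via the Neo4j Java driver (connection details and the uuid property name are assumptions):

import java.util.UUID;
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Session;
import org.neo4j.driver.Values;

public class UserUpsert {
    public static void main(String[] args) {
        try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                AuthTokens.basic("neo4j", "password"));
             Session session = driver.session()) {
            String uuid = UUID.randomUUID().toString(); // application-generated id
            // MERGE on a property you control — never on id(n).
            session.run("MERGE (u:User {uuid: $uuid}) "
                      + "ON CREATE SET u.facebookId = $facebookId "
                      + "ON MATCH SET u.facebookId = $facebookId",
                    Values.parameters("uuid", uuid, "facebookId", "13"));
        }
    }
}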