I have tried the select_node, activate_node and open_node events, and each of these events updates the corresponding node id in the DB when it is triggered.
Scenario:
If a node has many nested child nodes and I move this parent node into another sibling node, the child nodes' ids may change in the DOM, and I currently can't update those node ids in the DB. This creates a data error.
How can I avoid this kind of data error in jsTree when persisting node ids to the DB?
I really appreciate your help. Thank you,
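For reference, the kind of event binding described above looks roughly like this; the '/api/nodes/select' and '/api/nodes/move' endpoints and their payloads are placeholders, not an existing API:

// Minimal sketch of persisting jsTree node ids, including moves (jQuery + jsTree)
$('#tree')
  .on('select_node.jstree activate_node.jstree open_node.jstree', (e, data) => {
    // data.node.id is the id jsTree currently holds for the node
    $.post('/api/nodes/select', { id: data.node.id });
  })
  .on('move_node.jstree', (e, data) => {
    // after a drag and drop, persist the moved node together with its new
    // parent and position so the DB does not drift away from the tree
    $.post('/api/nodes/move', {
      id: data.node.id,
      parent: data.parent,
      position: data.position
    });
  });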
I'm experimenting with microservices architecture. I have UserService and ShoppingService.
In UserService I'm using MongoDB. When I create a new user in UserService, I want to sync basic user info to ShoppingService. In UserService I'm using something like event sourcing: when I create a new User, I first create a UserCreatedEvent and then apply the event to the domain User object. In the end I get a domain User object that holds the current state plus a list of events containing one UserCreatedEvent.
I wonder whether I should persist the events as a nested property of the User document or in a separate UserEvents collection. I was planning to use Kafka Connect to synchronize the events from UserService to ShoppingService.
If I persist the events inside the User document, I don't need the transaction that I would otherwise use to save an event to a separate UserEvents collection, but I can't set up the Kafka connector to track changes in the nested property only.
If I persist events in a separate UserEvents collection, I need to wrap the changes to User and UserEvents in a transaction. But saving events to a separate collection makes setting up the Kafka connector very easy, because I only track inserts and don't need to track updates of a nested UserEvents array in the User document.
I think I will go with the second option for the sake of simplicity, but maybe I've missed something. Is it a good idea to implement it like this?
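For concreteness, a minimal sketch of the second option with the MongoDB Node.js driver; the database name, collection names and event shape are placeholders, not something required by Kafka Connect:

import { MongoClient } from 'mongodb';

async function createUser(client: MongoClient, user: { userId: string; name: string }) {
  const session = client.startSession();
  try {
    await session.withTransaction(async () => {
      const db = client.db('userservice');
      // 1. persist the event in its own collection (the one Kafka Connect watches for inserts)
      await db.collection('userEvents').insertOne(
        { type: 'UserCreatedEvent', userId: user.userId, payload: user, sequence: 1 },
        { session }
      );
      // 2. persist the current state of the user
      await db.collection('users').insertOne({ ...user, version: 1 }, { session });
    });
  } finally {
    await session.endSession();
  }
}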
I would generally advise the second approach. Note that you can also eliminate the need for a transaction by observing that User is just a snapshot based on the UserEvents up to some point in the stream and thus doesn't have to be immediately updated.
With this, your read operation for User can be: select a user from User (the latest snapshot), which includes a version/sequence number saying that it's as-of some event; then select the events with later sequence numbers and apply those events to the user. If there's some querier which wants a faster response and can tolerate getting something stale, a different endpoint (or an option in the query) can bypass the event replay.
You can then have some asynchronous process which subscribes to the stream of user events and updates User based on those events.
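A rough sketch of that read path, assuming a users snapshot collection that carries a version field and a userEvents collection with a per-user sequence number (names invented here):

import { Db } from 'mongodb';

// placeholder domain logic: fold one event into the snapshot state
function applyEvent(state: any, event: any): any {
  switch (event.type) {
    case 'UserCreatedEvent':
      return { ...state, ...event.payload };
    default:
      return state;
  }
}

async function getCurrentUser(db: Db, userId: string) {
  // 1. latest snapshot, which records the sequence number it is as-of
  const snapshot = await db.collection('users').findOne({ userId });
  if (!snapshot) return null;
  // 2. events recorded after the snapshot, oldest first
  const newer = await db.collection('userEvents')
    .find({ userId, sequence: { $gt: snapshot.version } })
    .sort({ sequence: 1 })
    .toArray();
  // 3. replay them on top of the snapshot to get the current state
  return newer.reduce(applyEvent, snapshot);
}

A "fast but possibly stale" endpoint would simply skip steps 2 and 3 and return the snapshot as-is.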
I am using an ordered query of the Firebase Realtime Database. I have a .childMoved listener on the query, and when someone's index in the ordered list changes, my listener gets fired. However, there doesn't seem to be a way to know what the new index of the object is.
rtdb.child(refString).queryOrdered(byChild: "queuePosition")
    .observe(.childMoved, with: { snapshot in
        // Do something here with snapshot data
    }) { error in
        // error
    }
How can I find out where the object should be moved to? Or should I just do sorting on the client?
The Firebase Database doesn't expose indexes, since those don't scale well in a multi-user environment. It does, however, have an option to pass the key of the previous sibling of the node, via observe(_:andPreviousSiblingKeyWith:).
With this key you can look up the sibling node, and move the child node after that.
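The question uses the iOS SDK, where this is observe(_:andPreviousSiblingKeyWith:). As a sketch, the equivalent with the Firebase Web SDK looks like this; only the path value is invented, the index field comes from the question:

import { getDatabase, ref, query, orderByChild, onChildMoved } from 'firebase/database';

const refString = 'queues/demo'; // placeholder path
const q = query(ref(getDatabase(), refString), orderByChild('queuePosition'));

onChildMoved(q, (snapshot, previousSiblingKey) => {
  // previousSiblingKey is the key of the child that now comes directly before
  // the moved child; null means the moved child is now first in the ordering.
  // Re-insert the local copy of this child right after that sibling.
});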
I have a Node-RED flow that works in the following way:
It receives a message (in JSON form) and saves it to a Cloudant DB.
Then I can make an HTTP call that returns all the contents of the DB.
This is all good, but the problem is that when the message is saved to Cloudant it gets a random _id, so the order of the documents in the DB isn't the order they arrived in, but random.
Is there a way to set the _id while saving in Node-RED? Or is there another solution?
I just want the HTTP call to return the documents in the order they arrived (last to first or first to last, it doesn't matter).
You can set the _id with a Function node or a Change node before passing the message to the Cloudant out node.
But if you just want the documents in the order they arrived, add a timestamp field and make the query node use a view that sorts the documents by that timestamp.
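As a sketch, a Function node placed before the Cloudant out node could stamp each message like this (field names are up to you; the sortable _id is optional and can collide if two messages arrive in the same millisecond):

// runs inside a Node-RED Function node, before the Cloudant out node
msg.payload.timestamp = Date.now();                          // sort key for a view or query
msg.payload._id = Date.now().toString().padStart(15, '0');   // optional: arrival-ordered _id
return msg;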
Use case: Suppose I have the following aggregates:
Root aggregate - CustomerRootAggregate (manages each CustomerAggregate)
Child aggregate of the Root aggregate - CustomerAggregate (there are 10 customers)
Question: How do I send a DisableCustomer command to all 10 CustomerAggregates to update their state to disabled?
customerState.enabled = false
Solutions: Since CQRS does not allow the write side to query the read side to get a list of CustomerAggregate IDs, I thought of the following:
1. CustomerRootAggregate always stores the IDs of all its CustomerAggregates in the database as JSON. When a DisableAllCustomers command is received by CustomerRootAggregate, it fetches the CustomerIds JSON and sends a DisableCustomer command to all the children, where each child restores its state before applying the DisableCustomer command. But this means I have to maintain the consistency of the CustomerIds JSON record.
2. The client (browser UI) always sends the list of CustomerIds to apply DisableCustomer to. But this will be problematic for a database with thousands of customers.
3. The REST API layer checks for the DisableAllCustomers command, fetches all the IDs from the read side, and sends DisableAllCustomers(ids) with the IDs populated to the write side.
Which of these is the recommended approach, or is there a better one?
Root aggregate - CustomerRootAggregate (manages each CustomerAggregate)
Child aggregate of the Root aggregate - CustomerAggregate (there are 10 customers)
For starters, the phrase "child aggregate" is a bit confusing. If your model includes a "parent" entity that holds a direct reference to a "child" entity, then both of those entities must be part of the same aggregate.
However, you might have a Customer aggregate for each customer, and a CustomerSet aggregate that manages a collection of Ids.
How do I send a DisableCustomer command to all 10 CustomerAggregates to update their state to disabled?
The usual answer is that you run a query to get the set of Customers to be disabled, and then you dispatch a disableCustomer command to each.
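As a sketch of that flow (the read-model query and the command bus here are invented interfaces, not an existing framework API):

async function disableAllCustomers(
  readModel: { findEnabledCustomerIds(): Promise<string[]> },
  commandBus: { send(command: { type: string; customerId: string }): Promise<void> }
) {
  // query the read side for the affected aggregates
  const ids = await readModel.findEnabledCustomerIds();
  for (const customerId of ids) {
    // one command per Customer aggregate; how to handle a partial failure
    // (retry, report, compensate) is a separate requirement to pin down
    await commandBus.send({ type: 'DisableCustomer', customerId });
  }
}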
So both 3 and 2 are reasonable answers, with the caveat that you need to consider what your requirements are if some of the DisableCustomer commands fail.
2 in particular is seductive, because it clearly articulates that the client (human operator) is describing a task, which the application then translates into commands to be run by the domain model.
Trying to pack "thousands" of customer ids into the message may be a concern, but for several use cases you can find a way to shrink that down. For instance, if the task is "disable all", then the client can send the application instructions for how to recreate the "all" collection -- i.e. "run this query against this specific version of the collection" describes the list of customers to be disabled unambiguously.
When a DisableAllCustomers command is received by CustomerRootAggregate, it fetches the CustomerIds JSON and sends a DisableCustomer command to all the children, where each child restores its state before applying the DisableCustomer command. But this means I have to maintain the consistency of the CustomerIds JSON record.
This is close to a right idea, but not quite there. You dispatch a command to the collection aggregate. If it accepts the command, it produces an event that describes the customer ids to be disabled. This domain event is persisted as part of the event stream of the collection aggregate.
Subscribe to these events with an event handler that is responsible for creating a process manager. This process manager is another event sourced state machine. It looks sort of like an aggregate, but it responds to events. When an event is passed to it, it updates its own state, saves those events off in the current transaction, and then schedules commands to each Customer aggregate.
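A rough sketch of such a process manager; the event and command names are invented, and persisting its own state is left out:

interface DomainEvent { type: string; customerIds?: string[]; }
interface Command { type: string; customerId: string; }

class DisableAllCustomersProcessManager {
  // ids we still owe a DisableCustomer command (the manager's own state)
  private pending = new Set<string>();

  // fed by the event handler subscribed to the collection aggregate's stream
  handle(event: DomainEvent): Command[] {
    if (event.type !== 'AllCustomersDisableRequested' || !event.customerIds) return [];
    event.customerIds.forEach(id => this.pending.add(id));
    // schedule one command per Customer aggregate
    return [...this.pending].map(customerId => ({ type: 'DisableCustomer', customerId }));
  }

  // fed by each Customer aggregate's confirmation that it is now disabled
  markDisabled(customerId: string): void {
    this.pending.delete(customerId);
  }
}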
But it's a bunch of extra work to do it that way. Conventional wisdom suggests that you should usually begin by assuming that the process manager approach isn't necessary, and only introduce it if the business demands it. "Premature automation is the root of all evil" or something like that.
I have a widget that creates a POST request that creates a node and a dynamic number of subnodes, like:
./sling:resourceType:app/component
_charset_:utf-8
:status:browser
./data:data
./a/a:one
./a/b:two
./b/a:one
./b/b:two
This works nicely the first time. I get a node along with subnodes a and b.
The problem is with subsequent requests. I need all existing subnodes to be removed before the new ones are created. So if a previous request created subnodes a, b, c and d, the request above should result in only subnodes a and b remaining.
I know about the @Delete suffix, but I would need to know in advance which subnodes need to be deleted, which I don't.
Can this be achieved OOTB with the Sling Post Servlet?
Greetings.
In case you are using CQ 5.6 or 5.6.1, you can use the ':applyTo' request parameter to delete multiple items in a single request by passing a trailing star in its value.
For example, to delete all the children of '/content/foo', make a POST request with ':operation' = 'delete', and ':applyTo' = '/content/foo/*'.
$ curl -F":operation=delete" -F":applyTo=/content/foo/*" http://host/content/sample
This was introduced in Sling 2.1.2 and hence is not available in CQ 5.5 and below, as 5.5 runs on 2.1.1.
For 5.5, I suspect you might need to get the list of children and then pass their absolute paths as the :applyTo value to delete them, before adding the new nodes.
The solution I'm using is the @Delete suffix.
The problem with it is that you need to add one parameter with the @Delete suffix for every node you want to delete. What I'm doing is querying the node in advance to get all the subnodes of the node I'm updating and adding an @Delete parameter for each of them.
So, if the JCR originally has
-node
\node1
\node2
\node3
I will first get http://example.com/content/node.json and traverse the JSON. Finally I will send a request with:
./sling:resourceType:app/component
./value:value
./node1@Delete
./node2@Delete
./node3@Delete
./node1/value:value1
./node2/value:value2
that will update node1 and node2 while deleting node3 at the same time.