I know that individual nodes can be versioned in JCR repositories. But what if I have documents that consist of a hierarchy of small individual snippets, each an alias to a node in a larger snippet pool? I'd like to take a snapshot of all the nodes in a document and share the current state of the hierarchy among all clients that are editing the document. Is that possible with JCR, and can you give me some pointers on how to do it?
Yes, it is possible (and quite easy) to version a subgraph in its entirety. The node at the top of the subgraph should have the mix:versionable mixin, either explicitly or implicitly via a supertype of the node's primary type or mixin types. No other node in the subgraph needs to be marked as versionable; in fact, it's far easier if none of them are.
By default, the "on parent versioning" behavior of all property definitions and child node definitions is COPY, which will work perfectly here.
Then, simply get the VersionManager for the JCR Session (via the session's workspace), and check in the node:
javax.jcr.Node subgraphRoot = ...
subgraphRoot.addMixin("mix:versionable");  // make the subgraph root versionable
session.save();                            // the mixin must be persisted before checkin
javax.jcr.version.VersionManager vmgr = session.getWorkspace().getVersionManager();
vmgr.checkin(subgraphRoot.getPath());      // snapshots the root and, per COPY, the whole subgraph
Checking it out to modify it is just as simple:
vmgr.checkout(subgraphRoot.getPath());
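To let other clients inspect the snapshots or roll the document back to one, the same VersionManager exposes the subgraph's version history. A minimal sketch, with variable names continuing from above:

javax.jcr.version.VersionHistory history = vmgr.getVersionHistory(subgraphRoot.getPath());
javax.jcr.version.Version base = vmgr.getBaseVersion(subgraphRoot.getPath());
// Restore the entire subgraph to a previously checked-in snapshot by name:
vmgr.restore(subgraphRoot.getPath(), base.getName(), true);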
I'm wondering why the spec for SimpleAttributeOperand uses a browsePath list and not just a NodeId as the parameter.
From my understanding, a NodeId would uniquely identify a node, but instead SimpleAttributeOperand specifies a browse path list, which could lead to duplicates. In other words, two nodes can have the same browse name.
Am I correct in thinking this is an issue?
SimpleAttributeOperand is typically used to define the fields of an EventFilter.
The BrowsePath identifies a field of the event relative to the event type. For example, you define that you wish to monitor all events from the server and, for each of them, retrieve the Message field. Alarms are represented by nodes that contain the Message, but it's a different node for each alarm, and simple events don't define any nodes in the address space at all. So in practice you don't want to use NodeIds for defining that.
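For illustration, here is a hedged sketch of selecting the Message field of all events with Eclipse Milo (assuming its generated SimpleAttributeOperand constructor; any other OPC UA stack is analogous):

import org.eclipse.milo.opcua.stack.core.AttributeId;
import org.eclipse.milo.opcua.stack.core.Identifiers;
import org.eclipse.milo.opcua.stack.core.types.builtin.QualifiedName;
import org.eclipse.milo.opcua.stack.core.types.structured.SimpleAttributeOperand;

// Select the Message field, addressed relative to BaseEventType by browse path:
SimpleAttributeOperand messageField = new SimpleAttributeOperand(
    Identifiers.BaseEventType,                               // typeDefinitionId
    new QualifiedName[] { new QualifiedName(0, "Message") }, // browsePath
    AttributeId.Value.uid(),                                 // attributeId
    null                                                     // indexRange
);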
Is it possible to use a YANG deviation to change a leaf into a container? Vice versa would work as well. I looked in RFC 6020 but cannot think of how to do this. The container contains 4 leaves. I was planning to do a deviate replace, but am unable to figure out how the container could be added.
You can use the deviate not-supported to remove schema nodes and the augment statement to add schema nodes. However, since the augmenting nodes will be namespace-prefixed by the module identifier of the augmenting module, there's no way to change, say, a leaf /example:foo into a container /example:bar.
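A minimal sketch with made-up module and node names, removing a leaf and adding a container in its place; note that the new node ends up in the deviation module's namespace:

module example-devs {
  namespace "urn:example-devs";
  prefix dev;
  import example { prefix ex; }

  // remove the original leaf /ex:top/ex:foo
  deviation "/ex:top/ex:foo" {
    deviate not-supported;
  }

  // add a replacement container; its path becomes /ex:top/dev:bar
  augment "/ex:top" {
    container bar { }
  }
}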
What do you need this for?
I have a collection in my Cosmos database that I would like to observe for changes. I have found many documents (official and unofficial) explaining how to do this. There is one thing, though, that I cannot get to work reliably: how do I get the same changes delivered to multiple instances when I don't have any common reference for instance names?
What do I mean by this? Well, I'm running my workloads in a Kubernetes cluster (AKS). I have a variable number of instances within the cluster that should observe my collection. In order for change feeds to work properly, I have to have a unique instance name for each instance. The only candidate I have is the pod name. It's usually of the form <deployment-name>-<random string>, e.g. pod-5f597c9c56-lxw5b.
If I use the pod name as the instance name, not all instances receive the same changes (which is my requirement); only one instance will receive each change (see https://learn.microsoft.com/en-us/azure/cosmos-db/change-feed-processor#dynamic-scaling). What I can do is use the pod name as the feed name instead; then all instances get the same changes. This is what I fear will bite me in the butt at some point: when I peek into the lease container, I can see a set of documents per feed name. As pod names come and go (the random string part of the name), I fear the container will grow over time, generating a heap of garbage. I know Cosmos can handle huge workloads, but you know, I like to keep things tidy.
How can I keep this thing clean and tidy? I really don't want to invent (or reuse for that matter!) some protocol between my instances to vote for which instance gets which name out of a finite set of names.
One "simple" solution would be to build my own instance names, if AKS or Kubernetes held some "index" of some sort for my pods. I know stateful sets give me that, but I don't want to use stateful sets, as the pods themselves aren't really stateful (except for this particular aspect!).
There is a new Change Feed pull model (in preview at the time of writing).
The differences from the change feed processor are:
- You pull changes at your own pace instead of having them pushed to a hosted processor.
- You manage your own state: there is no lease container, and you persist continuation tokens wherever you choose.
- Parallelization is handled by reading individual FeedRanges rather than by distributing leases across instances.
In your case, it looks like you don't need parallelization (you want all instances to receive everything). The important part would be to design a state storing model that can maintain the continuation tokens (or not, maybe you don't care to continue if a pod goes down and then restarts).
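As a hedged sketch of the pull model with the Azure Cosmos DB Java SDK v4 (names like mydb/mycollection are made up, and the token persistence is left as a comment):

import com.azure.cosmos.CosmosAsyncClient;
import com.azure.cosmos.CosmosAsyncContainer;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.models.CosmosChangeFeedRequestOptions;
import com.azure.cosmos.models.FeedRange;
import com.fasterxml.jackson.databind.JsonNode;

CosmosAsyncClient client = new CosmosClientBuilder()
        .endpoint("<endpoint>")
        .key("<key>")
        .buildAsyncClient();
CosmosAsyncContainer container =
        client.getDatabase("mydb").getContainer("mycollection");

// Every pod reads the full range independently, so all instances see all changes.
CosmosChangeFeedRequestOptions options =
        CosmosChangeFeedRequestOptions.createForProcessingFromNow(FeedRange.forFullRange());

container.queryChangeFeed(options, JsonNode.class)
        .byPage()
        .doOnNext(page -> {
            page.getResults().forEach(item -> System.out.println(item));
            // Persist this per-pod token somewhere durable if the pod
            // should resume from where it left off after a restart:
            String token = page.getContinuationToken();
        })
        .blockLast();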
I would suggest that you proceed to use the pod name as the unique ID. If you are concerned about sprawl of the data, you could monitor the container and devise a clean-up mechanism for the metadata.
In order to have at-least-once delivery, metadata will need to be persisted somewhere to track which items have been ACK-ed, the position in a partition, etc. I suspect it could take a bit of work to get the change feed processor to give you at-least-once delivery once you consider pod interruption and re-scheduling during data flow.
As another option, Azure offers an implementation of checkpoint-based message sharing from partitioned Event Hubs via EventProcessorClient. With EventProcessorClient, a bit of checkpoint metadata is likewise written to a storage account.
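For completeness, a hedged sketch of that option using azure-messaging-eventhubs with the blob checkpoint store (connection strings and names are placeholders; note that instances sharing a consumer group split the partitions between them, so a pod that must see every event would use its own consumer group):

import com.azure.messaging.eventhubs.EventProcessorClient;
import com.azure.messaging.eventhubs.EventProcessorClientBuilder;
import com.azure.messaging.eventhubs.checkpointstore.blob.BlobCheckpointStore;
import com.azure.storage.blob.BlobContainerAsyncClient;
import com.azure.storage.blob.BlobContainerClientBuilder;

BlobContainerAsyncClient blobClient = new BlobContainerClientBuilder()
        .connectionString("<storage-connection-string>")
        .containerName("checkpoints")
        .buildAsyncClient();

EventProcessorClient processor = new EventProcessorClientBuilder()
        .connectionString("<event-hub-connection-string>", "<event-hub-name>")
        .consumerGroup("$Default")
        .checkpointStore(new BlobCheckpointStore(blobClient))
        .processEvent(ctx -> {
            System.out.println(ctx.getEventData().getBodyAsString());
            ctx.updateCheckpoint();  // this is the metadata that lands in the storage account
        })
        .processError(ctx -> ctx.getThrowable().printStackTrace())
        .buildEventProcessorClient();

processor.start();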
I am trying to deviate a list to a container in YANG.
I tried going through RFC 6020, and I couldn't figure out how to deviate at the container level.
You can't change the type of a YANG node.
However, you can use deviate not-supported on the list and an augment on the parent node to create a new container with the same name as the list (in the augmenting module's namespace). The child elements of the list will be lost.
What you are trying to achieve is not a common use case, and deviation modules are not designed for it. A deviation should be used as a last resort, to let NETCONF clients know that some part of the YANG modules is not supported, or only supported with restrictions.
You can't do it directly, but you can use deviate not-supported on the list and augment a new container onto the parent node. The deviate statement can only modify the properties of a data node; it cannot add a new data node.
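Sketched with made-up names, the combination looks like this, inside a deviation module that imports the target module with prefix ex:

// remove the original list
deviation "/ex:top/ex:items" {
  deviate not-supported;
}

// add a container in its place; it becomes /ex:top/dev:items,
// and any children it needs must be redeclared here
augment "/ex:top" {
  container items {
    leaf name { type string; }
  }
}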
Following this post from Neo4j's Google group, I have to say that I don't see any benefit in this multiple-label thing; on the contrary, IMHO it just adds complexity around things like uniqueness constraints. It could also tempt the user to introduce inheritance into the data model, which would cause frustration since that's not possible at all...
Labels don't just represent a type; rather, they are roles that apply in different contexts.
So in one role, certain attributes and relationships of a node might matter, and in another role (label) a different set (which might intersect with the first one).
We stayed away from inheritance as it opens a new can of worms, and we favor composition. So you'd rather compose a node as the sum of its parts. You can also mimic inheritance by attaching the "super"-types as additional labels to the child elements of your hierarchy.
Node labels can also be used to separate subgraphs within a larger graph, e.g. labeling the proteins that are active in human pathways and in phylo pathways with those labels, so you can quickly select the part of the graph you're interested in.
Those separate subgraphs can also come from different domains, like geo, social, catalogue, and supplier, that are combined in a single graph.
And multiple labels also make sense for separating the "technical" namespaces of your graph, used to represent in-graph indexes, from your "domain" labels.
Regarding uniqueness: all uniqueness constraints for the existing labels and properties on your nodes are enforced at the same time. If they cannot all be satisfied on insert or update, the operation will fail.
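For instance, a hedged Cypher sketch with made-up labels and properties (using Neo4j's legacy constraint syntax): a node carrying both labels must satisfy both constraints at once.

CREATE CONSTRAINT ON (u:User) ASSERT u.email IS UNIQUE;
CREATE CONSTRAINT ON (e:Employee) ASSERT e.staffId IS UNIQUE;

// This node carries both labels, so both uniqueness constraints
// are checked on every insert or update:
CREATE (n:User:Employee {email: "a@example.com", staffId: 42});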