I have nodes shared between parent and child pages through the mix:shareable mixin. The synchronization of data works fine on the author instance, but when the pages are activated the shared nodes get created with different UUIDs, which breaks the sync. I wrote this piece of code to re-clone the nodes when the UUIDs differ:
// parentPageNode and currentPageNode are the jcr nodes of the parent and current page respectively
Node parentFooNode = parentPageNode.getNode("foo");
Node currentFooNode = currentPageNode.getNode("foo");
if (!(parentFooNode.getUUID().equals(currentFooNode.getUUID()))) {
    log.info("parent page foo node and child page foo node have different UUID");
    // resolver factory obtained through @Reference injection
    // need admin creds to delete the node on publish
    ResourceResolver resourceResolver = resolverFactory.getAdministrativeResourceResolver(null);
    Session session = resourceResolver.adaptTo(Session.class);
    Workspace workspace = session.getWorkspace();
    currentFooNode.remove();
    log.info("node removed");
    session.save();
    log.info("session saved " + currentPageNode.hasNode("foo"));
    workspace.clone(workspace.getName(), parentFooNode.getPath(), currentPageNode.getPath() + "/foo", false);
    session.save();
}
The problem is that when this code runs, the node is not deleted, so when the node is cloned it creates a foo[2] node. Yet the log shows "session saved false" (currentPageNode.hasNode("foo") returns false even though the node was not actually deleted), and strangely the remove() call throws no exceptions. How do I delete the node when the UUIDs are different and resynchronize the child page's foo node with the parent page's foo node on the publish instance?
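For reference, here is the remove-and-clone flow I am aiming for, done entirely through the admin session (with the foo paths re-resolved there), in case the issue is that currentFooNode belongs to a different session than the one I call save() on:

// Minimal sketch: do the remove and clone entirely within the admin session,
// so the pending removal is actually persisted by adminSession.save().
Session adminSession = resourceResolver.adaptTo(Session.class);
Workspace workspace = adminSession.getWorkspace();

String parentFooPath = parentPageNode.getPath() + "/foo";   // shared source node
String currentFooPath = currentPageNode.getPath() + "/foo"; // out-of-sync copy

// re-fetch the node through the admin session before removing it
if (adminSession.nodeExists(currentFooPath)) {
    adminSession.getNode(currentFooPath).remove();
    adminSession.save();
}

// clone the parent's foo node within the same workspace, preserving its identifier;
// Workspace.clone operates directly on the workspace, so no extra save is needed for it
workspace.clone(workspace.getName(), parentFooPath, currentFooPath, false);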
I am writing a Golang application that uses Dgraph for persisting objects. From the documentation, I can infer that a new UID, and hence a new node, is created every time I mutate an object / run the code.
Is there a way to update the same node's data instead of creating a new node?
I tried changing the UID to use "_:name" for the UID field, but even this creates a new node every time the application is run. I want to be able to update the existing node if it is already present in the DB instead of creating a new node for it.
Unfortunately the docs aren't very beginner-friendly yet :/
To modify / mutate existing data you have to run a set operation and supply an RDF triple like <uid> <predicate> "value", i.e. <objectYouWantToModify> <attributeYouWantToModify> "quotedStringValue". If it is not an attribute but an edge, the value has to be another <uid>.
The full mutation would be, for example:
{
set {
<0x2> <name> "modified-name" .
}
}
The . terminates the statement, and there is an optional fourth parameter you can use to also assign a label.
Check https://www.w3.org/TR/n-quads/ for further details.
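For instance, to point an edge at another node instead of setting a literal value, the object is a uid as well (the friend predicate below is just an example):
{
  set {
    <0x2> <friend> <0x3> .
  }
}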
I am using Flink v1.4.0 and I have set up two distinct jobs. The first is a pipeline that consumes data from a Kafka Topic and stores them into a Queryable State (QS). Data are keyed by date. The second submits a query to the QS job and processes the returned data.
Both jobs were working fine with Flink v1.3.2, but with the new update everything has broken. Here is part of the code for the first job:
private void runPipeline() throws Exception {
    StreamExecutionEnvironment env = configurationEnvironment();
    QueryableStateStream<String, DataBucket> dataByDate = env.addSource(sourceDataFromKafka())
        .map(NewDataClass::new)
        .keyBy(data -> data.date)
        .asQueryableState("QSName", reduceIntoSingleDataBucket());
}
and here is the code on the client side:
QueryableStateClient client = new QueryableStateClient("localhost", 6123);

// the state descriptor of the state to be fetched
ValueStateDescriptor<DataBucket> descriptor = new ValueStateDescriptor<>(
    "QSName",
    TypeInformation.of(new TypeHint<DataBucket>() {}));

JobID jobId = JobID.fromHexString("xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx");
String key = "2017-01-06";

CompletableFuture<ValueState<DataBucket>> resultFuture = client.getKvState(
    jobId,
    "QSName",
    key,
    BasicTypeInfo.STRING_TYPE_INFO,
    descriptor);

try {
    ValueState<DataBucket> valueState = resultFuture.get();
    DataBucket bucket = valueState.value();
    System.out.println(bucket.getLabel());
} catch (IOException | InterruptedException | ExecutionException e) {
    throw new RuntimeException("Unable to query bucket key: " + key, e);
}
I have followed the instructions as per the following link:
https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/state/queryable_state.html
making sure to enable queryable state on my Flink cluster by copying flink-queryable-state-runtime_2.11-1.4.0.jar from the opt/ folder of the Flink distribution to the lib/ folder, and I verified that it runs in the task manager.
I keep getting the following error:
Exception in thread "main" java.lang.NullPointerException
at org.apache.flink.api.java.typeutils.GenericTypeInfo.createSerializer(GenericTypeInfo.java:84)
at org.apache.flink.api.common.state.StateDescriptor.initializeSerializerUnlessSet(StateDescriptor.java:253)
at org.apache.flink.queryablestate.client.QueryableStateClient.getKvState(QueryableStateClient.java:210)
at org.apache.flink.queryablestate.client.QueryableStateClient.getKvState(QueryableStateClient.java:174)
at com.company.dept.query.QuerySubmitter.main(QuerySubmitter.java:37)
Any idea what is happening? I think my requests don't reach the QS at all... I really don't know if or how I should change anything. Thanks.
So, as it turned out, two things were causing this error. The first was the use of the wrong constructor for creating the descriptor on the client side. Rather than using the one that only takes a name for the QS and a TypeHint, I had to use one where a type serializer along with a default value are provided, as shown below:
ValueStateDescriptor<DataBucket> descriptor = new ValueStateDescriptor<>(
"QSName",
TypeInformation.of(new TypeHint<DataBucket>() {}).createSerializer(new ExecutionConfig()),
DataBucket.emptyBucket()); // or anything that can be used as a default value
The second was related to the host and port values. The port has changed since v1.3.2 and is now 9069, and the host was also different in my case. You can verify both by checking the logs of any task manager for the line:
Started the Queryable State Proxy Server # ....
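The client construction on my side then ended up looking roughly like this (the host name below is a placeholder for whatever the proxy log reports):

// point the client at the Queryable State Proxy, not at the JobManager RPC port (6123)
QueryableStateClient client = new QueryableStateClient("task-manager-host", 9069);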
Finally, in case you are here because you are looking to allow port-range for queryable state client proxy, I suggest you follow the respective issue (FLINK-7788) here: https://issues.apache.org/jira/browse/FLINK-7788.
Is it possible to create the node /var/foo/baar in one step instead of node.addNode("foo").addNode("baar")?
Resource resource = resourceResolver.getResource("/var");
Node node = resource.adaptTo(Node.class);
Node nodeOfTheFile = node.addNode("foo").addNode("baar");
JcrUtils.putFile(nodeOfTheFile, filename, "text/csv", inputStream);
And how do I handle the case where the nodes already exist when creating them?
You are already using JcrUtils, so you can use one of the createPath methods. They create the intermediate nodes if they do not exist, and you can even define their node type:
http://docs.adobe.com/docs/en/cq/current/javadoc/com/day/cq/commons/jcr/JcrUtil.html
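For example, something along these lines should work with com.day.cq.commons.jcr.JcrUtil (the node type is just an example, adjust it to what you need):

// creates /var/foo/baar including intermediate nodes, or returns it if it already exists
Session session = resourceResolver.adaptTo(Session.class);
Node nodeOfTheFile = JcrUtil.createPath("/var/foo/baar", "nt:unstructured", session);
JcrUtils.putFile(nodeOfTheFile, filename, "text/csv", inputStream);
session.save();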
I realize that this is a pretty specific question, but I imagine someone has run into this before. I've got about fifty pages that were created about a year ago. We're trying to revamp the pages with new components, specifically in the header and the footer, while the content in the main-content area stays the same. So I'm trying to move everything over from the old pages to the new pages but keep only the main-content area. The problem is I can't just change the resource type on the old pages to point to the new page components, because the content is different and I'd end up with a bunch of nodes in the header and footer that I don't want. For example, here is my current content structure:
Old Content
star-trek
  jcr:content
    header
      nav
      social
      chat
    main-content
      column-one
      column-two
    footer
      sign-up
      mega-menu
New Content
star-wars
  jcr:content
    masthead
      mega-menu
    main-content
      column-one
      column-two
    bottom-footer
      left-links
      right-links
Does anybody have any ideas on how to move just the content in the main-content node and somehow remove the other nodes? I'm trying to do this programmatically because I don't want to create 50 pages from scratch. Any help is appreciated!
You can use the JCR API to move things around at will. I would:
1. Block users from accessing the content in question. This can be done with temporary ACLs, or by closing access on the front-end if you can.
2. Run a script or servlet that changes the content using the JCR APIs.
3. Check the results.
4. Let users access the content again.
For the content modification script, I suggest a script that modifies a single page (i.e. you call it with an HTTP request that ends in /content/star-trek.modify.txt) so that you can run it either on a single page, for testing, or on a group of pages once it's good.
The script starts from the current node, recurses into it to find nodes that it knows how to modify (based on their sling:resourceType), modifies them, and reports what it did in the logs or in its output.
To modify nodes the script uses the JCR Node APIs to move things around (and maybe Workspace.move).
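As a rough illustration (the page paths below are hypothetical, and the node structure is the one from the question), the core of such a script could be as small as this:

// replaces the new page's main-content with the one from the old page;
// oldPagePath / newPagePath are placeholders, e.g. /content/site/star-trek and /content/site/star-wars
void moveMainContent(Session session, String oldPagePath, String newPagePath) throws RepositoryException {
    String src = oldPagePath + "/jcr:content/main-content";
    String dst = newPagePath + "/jcr:content/main-content";
    if (session.nodeExists(dst)) {
        session.getNode(dst).remove(); // drop the placeholder main-content on the new page
    }
    session.move(src, dst); // carry the whole main-content subtree across
    session.save();
}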
It is indeed possible to write code that does what you need:
package com.test;

import javax.jcr.Node;
import javax.jcr.Property;
import javax.jcr.PropertyIterator;
import javax.jcr.Repository;
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;

import org.apache.jackrabbit.commons.JcrUtils;

public class Test {

    public void test() throws RepositoryException {
        // Create a connection to the CQ repository running on localhost
        Repository repository = JcrUtils.getRepository("http://localhost:4502/crx/server");
        System.out.println("rep is created");

        // Create a session
        Session session = repository.login(new SimpleCredentials("admin", "admin".toCharArray()));
        System.out.println("session is created");

        String starTrekNodePath = "/content/path/";
        String starWarsNodePath = "/content/anotherPath";

        boolean starTrekNodeFlag = session.nodeExists(starTrekNodePath);
        boolean starWarsNodeFlag = session.nodeExists(starWarsNodePath);

        if (starTrekNodeFlag && starWarsNodeFlag) {
            System.out.println("to infinity and beyond");
            Node starTrekNode = session.getNode(starTrekNodePath);
            Node starWarsNode = session.getNode(starWarsNodePath);

            // apply nested looping logic here to loop through all pages under these nodes;
            // the assumption is that you have similar page titles or something
            // along those lines to determine target and destination nodes.
            // The second assumption is that the destination pages already exist
            // with the component structure in question.
            // Once we have the target nodes, the following segment should be all you need.
            Node starTrekChildNode = iterator.next();  // assuming you use some form of iterator for the looping logic
            Node starWarsChildNode = iterator1.next(); // assuming you use some form of iterator for the looping logic

            // next we get the jcr:content node of the target and destination pages
            Node starTrekChildJcrNode = starTrekChildNode.getNode("jcr:content");
            Node starWarsChildJcrNode = starWarsChildNode.getNode("jcr:content");

            // now we fetch the main-content nodes
            Node starTrekChildMCNode = starTrekChildJcrNode.getNode("main-content");
            Node starWarsChildMCNode = starWarsChildJcrNode.getNode("main-content");

            // now we fetch each component node
            Node starTrekChildC1Node = starTrekChildMCNode.getNode("column-one");
            Node starTrekChildC2Node = starTrekChildMCNode.getNode("column-two");
            Node starWarsChildC1Node = starWarsChildMCNode.getNode("column-one");
            Node starWarsChildC2Node = starWarsChildMCNode.getNode("column-two");

            // fetch the properties of column-one and column-two from the source page
            // (propName is a placeholder for the property name pattern you want to copy)
            String prop1 = null;
            String prop2 = null;
            PropertyIterator c1Iterator = starTrekChildC1Node.getProperties(propName);
            while (c1Iterator.hasNext()) {
                Property prop = c1Iterator.nextProperty();
                prop1 = prop.getString();
            }
            PropertyIterator c2Iterator = starTrekChildC2Node.getProperties(propName);
            while (c2Iterator.hasNext()) {
                Property prop = c2Iterator.nextProperty();
                prop2 = prop.getString();
            }

            // and now we set the values on the destination page
            starWarsChildC1Node.setProperty("property-name", prop1);
            starWarsChildC2Node.setProperty("property-name", prop2);

            // end loops as appropriate, then persist the changes
            session.save();
        }
    }
}
Hopefully this sets you on the right track. You'd have to figure out how you want to identify destination and target pages based on your folder structure in /content, but the essential logic should be the same.
The problem with the answers I'm seeing here is that they have you writing servlets that execute JCR operations to move things around. While technically that works, it's not really a scalable or reusable way to do this. You have to write some very specific code, deploy it, execute it, and then delete it (or it lives out there forever). It's kind of impractical and not totally RESTful.
Here are two better options:
One of my colleagues wrote the CQ Groovy Console, which gives you the ability to use Groovy to script changes to the repository. We frequently use it for content transformations like the one you've described. The advantage of using Groovy is that it's a script (not compiled/deployed code). You still have access to the JCR API if you need it, but the console has a bunch of helper methods to make things even easier. I highly recommend this approach.
https://github.com/Citytechinc/cq-groovy-console
The other approach is to use the Bulk Editor tool in AEM. You can export a TSV of content, make changes, then reimport. You'll have to turn the import feature on using an administrative function, but I've used this with some success. Beware though, it's a bit buggy with array-valued properties.
TL;DR version:
In CQ workflows, is there a difference between what's available to the OR Split compared to the Process Step?
Is it possible to access the /history/ nodes of a workflow instance from within an OR Split?
How?!
The whole story:
I'm working on a workflow in CQ5 / AEM5.6.
In this workflow I have a custom dialog, which stores a couple of properties on the workflow instance.
The path to the property I'm having trouble with is: /workflow/instances/[this instance]/history/[workItem id]/workItem/metaData and I've called the property "reject-or-approve".
The dialog sets the property fine (via a dropdown that lets you set it to "reject" or "approve"), and I can access other properties on this node via a process step (in ECMA script) using:
var actionReason;
var history = workflowSession.getHistory(workItem.getWorkflow());
// loop backwards through the workItems
// and as soon as we find an Action Reason that is not empty,
// store that as 'actionReason' and break.
for (var index = history.size() - 1; index >= 0; index--) {
    var previous = history.get(index);
    var tempActionReason = previous.getWorkItem().getMetaDataMap().get('action-message');
    if ((tempActionReason != '') && (tempActionReason != null)) {
        actionReason = tempActionReason;
        break;
    }
}
The process step is not the problem though. Where I'm having trouble is when I try to do the same thing from inside an OR Split.
When I try the same workflowSession.getHistory(workItem.getWorkflow()) in an OR Split, it throws an error saying workItem is not defined.
I've tried storing this property on the payload instead (i.e. storing it under the page's jcr:content), and in that case the property does seem to be available to the OR Split, but my problems with that are:
This reject-or-approve property is only relevant to the current workflow instance, so storing it on the page's jcr:content doesn't really make sense. jcr:content properties will persist after the workflow is closed, and will be accessible to future workflow instances. I could work around this (i.e. don't let workflows do anything based on the property unless I'm sure this instance has written to the property already), but this doesn't feel right and is probably error-prone.
For some reason, when running through the custom dialog in my workflow, only the Admin user group seems to be able to write to the jcr:content property. When I use the dialog as any other user group (which I need to do for this workflow design), the dialog looks as though it's working, but never actually writes to the jcr:content property.
So for a couple of different reasons I'd rather keep this property local to the workflow instance instead of storing it on the page's jcr:content -- however, if anyone can think of a reason why my dialog isn't setting the property on the jcr:content when I use any group other than admin, that would give me a workaround even if it's not exactly the solution I'm looking for.
Thanks in advance if anyone can help! I know this is kind of obscure, but I've been stuck on it for ages.
A couple of days ago I ran into the same issue. The issue here is that you don't have the workItem object, because you don't really have an existing workItem. Imagine the following: as you go through the workflow, you get a couple of workItems, which means either a process step or an inbox item. When you are in an OR Split, you don't have existing workItems; you can confirm this by visiting the /workItems node of the workflow instance. Your workaround seems to be the only way to get around this "issue".
I've solved it. It's not all that elegant looking, but it seems to be a pretty solid solution.
Here's some background:
Dialogs seem to reliably let you store properties either on:
the payload's jcr:content node (which wasn't practical for me, because the payload is locked during the workflow, and doesn't let non-admins write to its jcr:content)
the workItem/metaData for the current workflow step
However, Split steps don't have access to workItem. I found a fairly un-helpful confirmation of that here: http://blogs.adobe.com/dmcmahon/2013/03/26/cq5-failure-running-script-etcworkflowscriptscaworkitem-ecma-referenceerror-workitem-is-not-defined/
So basically the issue was, the Dialog step could store the property, but the OR Split couldn't access it.
My workaround was to add a Process step straight after the Dialog in my workflow. Process steps do have access to workItem, so they can read the property set by the Dialog. I never particularly wanted to store this data on the payload's jcr:content, so I looked for another location. It turns out the workflow metaData (at the top level of the workflow instance node, rather than workItem/metaData, which is inside the /history sub-node) is accessible to both the Process step and the OR Split. So, my Process step now reads the workItem's approveReject property (set by the Dialog), and then writes it to the workflow's metaData node. Then, the OR Split reads the property from its new location, and does its magic.
The way you access the workflow metaData from the Process step and the OR Split is not consistent, but you can get there from both.
Here's some code: (complete with comments. You're welcome)
In the dialog where you choose to approve or reject, the name of the field is set to rejectApprove. There's no ./ or anything before it. This tells it to store the property on the workItem/metaData node for the current workflow step under /history/.
Straight after the dialog, a Process step runs this:
var rejectApprove;
var history = workflowSession.getHistory(workItem.getWorkflow());
// loop backwards through workItems
// and as soon as we find a rejectApprove that is not empty
// store that as 'rejectApprove' and break.
for (var index = history.size() - 1; index >= 0; index--) {
    var previous = history.get(index);
    var tempRejectApprove = previous.getWorkItem().getMetaDataMap().get('rejectApprove');
    if ((tempRejectApprove != '') && (tempRejectApprove != null)) {
        rejectApprove = tempRejectApprove;
        break;
    }
}
// steps up from the workflow step into the workflow metaData,
// and stores the rejectApprove property there
// (where it can be accessed by an OR Split)
workItem.getWorkflowData().getMetaData().put('rejectApprove', rejectApprove);
Then after the Process step, the OR Split has the following in its tabs:
function check() {
    var match = 'approve';
    if (workflowData.getMetaData().get('rejectApprove') == match) {
        return true;
    } else {
        return false;
    }
}
Note: use this in the tab for the "approve" path, then copy it and replace var match = 'approve' with var match = 'reject' for the other path.
So the key here is that from a Process step:
workItem.getWorkflowData().getMetaData().put('rejectApprove', rejectApprove);
writes to the same property that:
workflowData.getMetaData().get('rejectApprove') reads from when you execute it in an OR Split.
To suit our business requirements, there's more to the workflow I've implemented than just this, but the method above seems to be a pretty reliable way to get values that are entered in a dialog, and access them from within an OR Split.
It seems pretty silly that the OR Split can't access the workItem directly, and I'd be interested to know if there's a less roundabout way of doing this, but for now this has solved my problem.
I really hope someone else has this same problem and finds this useful, because it took me way too long to figure out, only to apply it once!