JBPM: using an exclusive gateway with data modeler variable conditions - subprocess

Environment: JBPM 6
Want to achieve: a reusable sub-process
Facing challenges in:
a. Passing variable values from the parent process to the child sub-process, and vice versa
b. Writing a sequence flow condition that checks only one field of an object variable
I am trying to create a reusable sub-process in jBPM. This sub-process will be called only under one condition: when the "userid" variable is empty. I am facing two challenges.
1. Gateway condition:
a. If the userid field of the user object is empty, the sub-process will be called.
b. If the userid field of the user object is not empty, the sub-process will not be called.
I have achieved the above using variables of type String (without using an object) in the sequence flow conditions, but when I try to do the same with object variables from the data modeler, only one option is available in the sequence flow condition: "if object is null".
My requirement: instead of checking the complete object, only one field (userid) of the object "User" should be checked.
2. How do I pass the child sub-process variable values back to the parent process, and vice versa?
Please help

The constraint editor only allows basic conditions and currently doesn't allow you to specify constraints on custom objects (other than "is null"). For more advanced constraints like the one you mention, you can switch to the script tab and type the expression yourself.
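For example, with the Java dialect selected on the script tab, the condition could read as follows (a minimal sketch, assuming a process variable named user of the data modeler type "User" with a generated getUserid() getter):

    // Take this path only when the userid field is missing or empty.
    return user == null || user.getUserid() == null || user.getUserid().isEmpty();

The opposite branch would return the negation, so exactly one outgoing flow of the exclusive gateway evaluates to true.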
For mapping data input and output between parent and sub-process, you simply need to define input and output mappings on the call activity in the parent process. Note that you might have to define additional data inputs / outputs first for the variables you would like to map. A simple example:
https://github.com/droolsjbpm/jbpm/blob/master/jbpm-bpmn2/src/test/resources/BPMN2-CallActivity.bpmn2
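At runtime the mappings copy values in both directions. A minimal sketch of the effect, using a hypothetical process id and variable name that would have to match the mappings defined on the call activity:

    import java.util.HashMap;
    import java.util.Map;
    import org.kie.api.runtime.KieSession;
    import org.kie.api.runtime.process.ProcessInstance;
    import org.kie.api.runtime.process.WorkflowProcessInstance;

    public static Object callParent(KieSession ksession, Object user) {
        // "com.sample.ParentProcess" and the "user" variable are hypothetical;
        // "user" is mapped into the call activity's data input and written
        // back through its data output.
        Map<String, Object> params = new HashMap<String, Object>();
        params.put("user", user);
        ProcessInstance pi = ksession.startProcess("com.sample.ParentProcess", params);
        // Once the call activity has completed (and while the parent instance
        // is still active), the data output mapping has copied the child's
        // value back into the parent's "user" variable.
        return ((WorkflowProcessInstance) pi).getVariable("user");
    }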

Related

How to identify the multi-instance sub-process and differentiate it from the main process in Jbpm?

I have used a multi-instance sub-process which includes a workflow with a human task. When executing, it creates as many human tasks as there are elements in the collection object. But all the tasks have the same process instance id. How does the relation between the parent process and the multi-instance sub-process work?
If there are multiple elements in the collection list, it will create that many tasks inside the multi-instance sub-process. As all the tasks have the same process instance id, how do I identify the respective process variable values for each task, and the uniqueness of each flow afterwards? And is there a way to make it create a different instance id for each task of the multi-instance sub-process?
I did not get all of the question, but I will try to answer what I got:
Human tasks have their own task instance id.
What is the collection object? If you mean tasks in the BPMN model, then it is as expected: the process instance flow starts after the start node, and when it reaches a human task it creates a task instance with an id. You can see it in the tasks UI, and with the API you can claim, work on, complete, populate data, etc.
It is wise to have a separate variable for every task that can execute in parallel. Then the input is kept in distinct data placeholders and you can use it accordingly.
You can create a different instance (task instance) for each task, or have repeatable tasks.
Well, the answer was to put the multi-instance into a sub-process; this gives me a separate process instance id for each element of my List (the input of the multi-instance).
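To see the per-task ids from code, the task list can be queried through the jBPM 6 task service (a minimal sketch; the user id "john" is an illustrative assumption):

    import java.util.List;
    import org.kie.api.task.TaskService;
    import org.kie.api.task.model.TaskSummary;

    public static void listTasks(TaskService taskService) {
        // "john" is a sample user id.
        List<TaskSummary> tasks = taskService.getTasksAssignedAsPotentialOwner("john", "en-UK");
        for (TaskSummary task : tasks) {
            // Each human task has its own task id, and with the multi-instance
            // wrapped in a sub-process each also has its own process instance id.
            System.out.println("task " + task.getId()
                    + " -> process instance " + task.getProcessInstanceId());
        }
    }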

Assigning bins to records in CHAID model

I built a custom CHAID tree in SPSS modeler. I would like to assign the particular terminal nodes to all of the records in the dataset. How would I go about doing this from within the software?
Assuming that you used the regular node called CHAID: inside the diamond icon (the created CHAID model), if you select the rule identifier option in the settings tab, the output will add another variable called $RI-XXX that classifies all the records into the terminal nodes. Just check that option, put a Table node after it, and all the records will be classified.
You just need to apply the model to whatever data set you need; the inputs only have to be the same (type and, eventually, storage).
The diamond contains the algorithm; you can disconnect it and connect it to whatever you want.
http://beyondthearc.com/blog/wp-content/uploads/2015/02/spss.png

How does Activiti dynamic assignment of candidate user work?

There is a way to pass candidate users dynamically to an Activiti workflow task, as described in:
How do I pass a list of candidate users to an activiti workflow task in alfresco?
When candidateUser/candidateGroup is set for a UserTask using a variable, when is the expression evaluated? Is the task id -> user/group mapping persisted in the database for fast queries, such as listing all the tasks a particular user can claim? Which table is it stored in?
When human tasks are created there are two distinct events that fire.
Create : When the task itself is created and most of the task metadata is associated with the task.
Assign : When the task assignment is evaluated and the task is assigned to either an assignee or candidateGroup.
As such, the candidateGroup expression is evaluated during the assign phase.
This means we can easily manipulate the list of candidates based on a rule, database result, or some other business logic before the task is actually assigned, by using a task listener that fires on the create phase, as sketched below.
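A minimal sketch of such a listener (the class name and candidate values are illustrative; the listener would be attached to the user task with activiti:taskListener event="create" in the process definition):

    import org.activiti.engine.delegate.DelegateTask;
    import org.activiti.engine.delegate.TaskListener;

    public class DynamicCandidateListener implements TaskListener {

        @Override
        public void notify(DelegateTask delegateTask) {
            // Fires on the "create" event, i.e. before assignment is final:
            // compute candidates from a rule, database result, or other
            // business logic, then attach them to the task.
            delegateTask.addCandidateUser("kermit");
            delegateTask.addCandidateGroup("management");
        }
    }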
Hope this helps,
G
Concerning the "What table is it stored in?" part of your question:
Candidate start groups/users for a given task or process are stored in the ACT_IDENTITY_LINK table.
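Instead of querying that table directly, the same information can be read through the API (a sketch; taskId is assumed to be a known task id and "kermit" a sample user):

    import java.util.List;
    import org.activiti.engine.TaskService;
    import org.activiti.engine.task.IdentityLink;
    import org.activiti.engine.task.Task;

    public static void printCandidates(TaskService taskService, String taskId) {
        // Reads the same candidate data that backs ACT_IDENTITY_LINK.
        List<IdentityLink> links = taskService.getIdentityLinksForTask(taskId);
        for (IdentityLink link : links) {
            System.out.println(link.getType() + ": user=" + link.getUserId()
                    + ", group=" + link.getGroupId());
        }
        // Listing all tasks a given user can claim is served from the same data:
        List<Task> claimable = taskService.createTaskQuery()
                .taskCandidateUser("kermit").list();
        System.out.println(claimable.size() + " claimable tasks for kermit");
    }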

BIRT: Using information from one Dataset as parameter of an other

I'm creating some BIRT reports with Eclipse, and I have the following problem.
I've got two datasets (one named diag, the other named risk). In my report I produce, for every record in diag, a region with a diag_id. Now I want to use this diag_id as an input parameter for the second dataset (risk). Is this possible, and how?
To link one dataset to another in BIRT, you can either:
Create a subreport within your report that links one dataset to another via an input parameter - see this Eclipse tutorial.
or:
Create a joint dataset that explicitly links the two datasets together - see the answer to this StackOverflow question.
Alternatively, if both datasets come from the same relational database, you could simply combine the two queries into a single query.
If you are using scripted data sources, you could use variables.
Add a variable through the Eclipse UI called "diag_id".
In the fetch script of diag, set diag_id:
vars["diag_id"] = ...; // store value in Variable.
Then, in the open script of risk, use the diag_id however you need to.
diag_id = vars["diag_id"];
This implies that the risk report elements are nested inside the diag repeating element, so that diag's fetch script runs before each risk open.

How to make an InArgument's value dependant upon the value of another InArgument at design time

I have a requirement to allow a user to specify the value of an InArgument / property from a list of valid values (e.g. a combobox). The list of valid values is determined by the value of another InArgument (the value of which will be set by an expression).
For instance, at design time:
User enters a file path into workflow variable FilePath
The DependedUpon InArgument is set to the value of FilePath
The file is queried and a list of valid values is displayed to the user to select the appropriate value (presumably via a custom PropertyValueEditor).
Is this possible?
Considering this is being done at design time, I'd strongly suggest you provide for all this logic within the designer, rather than in the Activity itself.
Design-time logic shouldn't be contained within your Activity. Your Activity should be able to run independent of any designer. Think about it this way...
You sit down and design your workflow using Activities and their designers. Once done, you install/xcopy the workflows to a server somewhere else. When the server loads that Activity prior to executing it, what happens when your design logic executes in CacheMetadata? Either it is skipped using some heuristic to determine that you are not running in design time, or you include extra logic to skip this code when it is unable to locate that file. Either way, why is a server executing this design time code? The answer is that it shouldn't be executing it; that code belongs with the designers.
This is why, if you look at the framework, you'll see that Activities and their designers exist in different assemblies. Your code should be the same way--design-centric code should be delivered in separate assemblies from your Activities, so that you may deliver both to designers, and only the Activity assemblies to your application servers.
When do you want to validate this, at design time or run time?
Design time is limited because the user can use an expression that depends on another variable, and you can't read that value at design time. You can, however, inspect the expression and possibly deduce an invalid combination that way. In this case you need to add code to the CacheMetadata function.
At run time you can get the actual values and validate them in the Execute function.