In MDriven Enterprise Information, why is there a Processes Hierarchy and also a Processes Tree?

In MDriven Enterprise Information, why is there a Processes Hierarchy and also a Processes Tree? Aren’t they the same thing? Is this not redundant duplication?

Processes are defined by their steps.
A process step can make use of another process - i.e. define a sub-process.
To show both the complete list of processes and the resulting expanded tree of processes and their sub-processes (and their sub-processes, to any depth), we added two nodes.
The first node is the flat list of all existing processes, regardless of whether they are used as sub-processes.
The second node is the constructed tree of which process uses which - a sub-process can therefore show up multiple times in this tree.
Notice that ApproveNewBulk is re-used in 2 processes in the example below.
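The two views can be sketched with a small Python model (the process names follow the ApproveNewBulk example; the data structure itself is a simplification for illustration, not MDriven's actual model):

```python
# Sketch of the two nodes: a flat process list and an expanded usage tree.
# The dict maps each process to the sub-processes its steps use
# (hypothetical parent names; assumes no cyclic sub-process references).

processes = {
    "HireEmployee":   ["ApproveNewBulk"],
    "PurchaseGoods":  ["ApproveNewBulk"],
    "ApproveNewBulk": [],
}

# First node: the straight list of every process, sub-process or not.
flat_list = sorted(processes)

# Second node: the expanded tree of which process uses which; a
# sub-process appears once for every place it is used.
def expand(name, depth=0):
    lines = ["  " * depth + name]
    for sub in processes[name]:
        lines.extend(expand(sub, depth + 1))
    return lines

tree = []
for root in flat_list:
    tree.extend(expand(root))

print("\n".join(tree))
```

Here ApproveNewBulk shows up once in the flat list but twice as a child in the tree - once under each process that reuses it.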

Related

Set Permissions on Folder without Processing Sub Files and Folders

I have two related issues that I am trying to solve. I want to update an ACL on a folder without it processing all of the children. My first example: when setting "This folder only" on a high-level folder, setting the permissions takes forever because it processes all of the children. The second example: I have a file system where Everyone is directly applied to each item, and I need to remove the entry without taking the time to process the children. I have used Get/Set-ACL and Get/Remove/Add-NTFSAccess, but cannot figure out how to stop the processing of the child objects.
NTFS inheritance is similar to genetic inheritance: just as we, as children, get to blame our parents for all our faults whether they like it or not, child objects decide whether they inherit permissions from parent objects.
You cannot, using NTFS, change a permission on a parent object and tell it not to allow propagation to child objects.

How to force PerfView to collect ETW events coming only from one process

I know there is a /Process:NameOrPID switch, but it only affects the /StopXXX commands. Collecting ETW events from all processes leads to a big *.ETL file. I want to capture ETW events from only one process to avoid polluting the output file with irrelevant events.
Updated 2019-04-14.
Now there is a way to do that. Please use /focusProcess=ProcessIDOrName option available in PerfView 2.0.32 (also available in UI starting from 2.0.39).
If you know the names of the ETW providers emitting events from your process you can filter the process when specifying providers in the Additional Providers text box, or in the -Providers or -OnlyProviders command line arguments to perfview.
From PerfView's docs:
The Additional Providers TextBox - A comma-separated list of specifications for providers. These can be specified by using the ... button or by the following textual specification. Each provider specification has the general form provider:keywords:level:values. The keywords and level parts are optional and can be omitted (for example, provider:keywords:values or provider:values is legal).
Process filters occur in the values section. Relevant portions from the docs:
values - this is a list of semicolon-separated KEY=VALUE pairs, which are used to pass extra information to the provider or to the ETW system. KEY values that begin with a # are commands to the ETW system. Everything else is passed on to the provider (EventSources have direct support for accepting this information in their OnEventCommand method). The special ETW keywords include
#ProcessIDFilter - a space separated list of decimal process IDs to collect data from. Only events from these processes (or those named in the #ProcessNameFilter) will be collected. Since IDs only exist after a process is created, this only works on processes that are running at the time collection starts.
#ProcessNameFilter - a space separated list of process names (a process name is the file name (no path) of the executable INCLUDING the .EXE extension). Only events from the named processes (or those named in the #ProcessIDFilter) will be collected. It does not matter if the process was running before collection or not.
So, if I have an ETW provider named my-provider running in a process named my.process.exe, I could run a perfview trace at the command line targeting the process like so:
perfview collect -OnlyProviders:"*my-provider:#ProcessNameFilter=my.process.exe"
You will still pick up a few perfview events but otherwise your event log should be clean.

SSIS Child Packages not starting at the same time

I have a Database Project inside of SSDT 2012 that contains an SSIS project using the package deployment model. The goal of the project is to load a lot of information at once that would normally take too much time if one package did it. So I divided it between 15 children, each doing its own separate part, loading data into various SQL tables. So, inside this project is one parent package and 15 child packages. Because of the type of data that is loading, I have to use a script task to insert it all. Each child package is the same, differing only in the parameters that divide the data up between the children. Each child package is executed using an External Reference through the File System.
The problem I'm having is while the parent package is supposed to start all the child packages at once, not all of the children are starting. It's as if there is a limit to how many packages can start at one time (looks like about 10 or 11). Once it hits this limit, the rest don't start. But when one package finishes, another immediately starts.
Is there a property I'm missing that is limiting how many packages can run at the same time? Based on what others are able to run at the same time, there seems to be something I'm missing. I read somewhere that memory can be a factor, but when I look at Task Manager, I don't see anything above 15% of my memory used.
The problem is solved by looking at the property MaxConcurrentExecutables on the parent package. In my parent package, this property had the default value of -1, which means the number of tasks that can run in parallel (in this case, child packages) is calculated as the number of cores on your PC plus 2.
In my case, my laptop has 8 cores; plus 2, that put me at 10 packages running at the same time. You can override this value by putting a higher positive number in its place to allow more children to run. After putting in 20, all tasks started at once.
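The effect of that default can be sketched with a small Python simulation (a sketch of the concurrency cap, not SSIS itself): with a limit of cores + 2 = 10, only 10 of the 15 child tasks can be in flight at once, and the rest start as slots free up.

```python
import threading
import time

# Simulation of MaxConcurrentExecutables = -1 resolving to cores + 2.
# (Hypothetical numbers matching the answer above; not SSIS itself.)
CORES = 8
cap = CORES + 2        # default -1 -> 8 cores + 2 = 10 concurrent tasks
children = 15

running = 0
peak = 0
lock = threading.Lock()
gate = threading.Semaphore(cap)   # the engine's concurrency limit

def child_package(n):
    global running, peak
    with gate:                    # a child starts only when a slot frees up
        with lock:
            running += 1
            peak = max(peak, running)
        time.sleep(0.05)          # pretend to load data
        with lock:
            running -= 1

threads = [threading.Thread(target=child_package, args=(i,))
           for i in range(children)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)   # the peak never exceeds the cap of 10
```

Raising the cap to 20 (the analogue of setting MaxConcurrentExecutables to a higher positive number) would let all 15 children start immediately.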
More information about this can be found here:
https://andrewbutenko.wordpress.com/2012/02/15/ssis-package-maxconcurrentexecutables/

How to connect separate processes under the same project (jBPM)

My team is new to developing these things and I came into a project that is defining an over-arching workflow using separate processes that are all defined under the same project. So it appears that right now the processes defined are all discrete units, and the plan was to connect these units together using inputs and outputs.
Based on the documentation it looks like the best-practicey way of doing this would be to define the entire, over-arching workflow using sub-process tasks.
So I wonder:
Is the implementation we've started workable?
or
Should I only have one process unit per one workflow, which defines sub-processes if the workflow is too complicated and has discrete parts?
It's fine to separate certain parts of the process out into their own processes and then call those from some sort of parent process. The task to use in the parent process is called a reusable sub-process, or call activity. It's absolutely fine to have multiple processes in the same project.
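In BPMN2 terms, the reusable sub-process mentioned above is a callActivity element whose calledElement attribute names the id of the child process definition (a sketch - the ids here are hypothetical):

```xml
<!-- In the parent process: a reusable sub-process / call activity.    -->
<!-- "calledElement" must match the id of the child process definition. -->
<callActivity id="callChild" name="Run child workflow"
              calledElement="com.example.ChildProcess"/>
```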

Dependency checks in JBPM workflows

All,
I am using JBoss JBPM and Drools.
The workflows are loaded into the KnowledgeBuilder as resources. There are multiple sub-processes (or child processes) that are invoked from parent processes.
Is there any way to check whether these child processes were loaded prior to loading the parent process?
I.e. some kind of parent-child dependency check.
The reason is: if any sub-processes (child processes) are missing, I find out only at run time (when my workflow is actually running). Is there a way to determine this before actually firing the workflow?
regards
D
There is no such check, I'm afraid. Because the sub-process reference can be a process variable, which may only be calculated at runtime, it is difficult to check in general. Parsing the process to find the sub-process references, when they are not variables, can be done fairly easily.
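The static check suggested above can be sketched as follows: parse each BPMN2 resource, collect the calledElement of every callActivity, and compare it against the set of process ids that were loaded. This is a simplification (shown in Python rather than Java for brevity, with hypothetical process ids), and - as noted - references held in process variables are invisible to it.

```python
import xml.etree.ElementTree as ET

BPMN2 = "http://www.omg.org/spec/BPMN/20100524/MODEL"

def static_subprocess_check(bpmn_sources):
    """bpmn_sources: list of BPMN2 XML strings (one per loaded resource).
    Returns the set of calledElement ids that no loaded process defines.
    Variable-based references (#{...}) cannot be resolved statically."""
    defined, referenced = set(), set()
    for xml in bpmn_sources:
        root = ET.fromstring(xml)
        for proc in root.iter("{%s}process" % BPMN2):
            defined.add(proc.get("id"))
        for call in root.iter("{%s}callActivity" % BPMN2):
            target = call.get("calledElement", "")
            if target and not target.startswith("#{"):  # skip runtime vars
                referenced.add(target)
    return referenced - defined

# Minimal illustration: a parent that calls a child nobody has loaded.
parent = """<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL">
  <process id="com.example.Parent">
    <callActivity id="call1" calledElement="com.example.Child"/>
  </process>
</definitions>"""

print(static_subprocess_check([parent]))  # com.example.Child is missing
```

Running such a check over all resources before building the knowledge base would surface missing literal sub-process references at load time instead of at run time.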
Which version of jBPM are you using?
Cheers