BPMN - reusable process over several pools - enterprise-architect

I am using Enterprise Architect, and it seems like what I want to model in BPMN 2.0 is forbidden, but I just don't understand why; maybe someone can help.
According to the BPMN spec, an activity cannot be used in several pools, as it is always bound to one pool.
BUT activities can be marked as "call activities", which can have their own pools and be reused, right? Meaning if I have a sub-process marked as a call activity, using its own pool, shouldn't I be able to use it in different pools as well?
To clarify what I need to model: in a warehouse, I have several processes, all with different pools. I need to use pools and not lanes, as my processes may only communicate via messages, which is not allowed within a single pool (right?).
Now there is one process that all other processes can end up in: the general "error handling".
But no matter what I try, I cannot use this activity more than once; Enterprise Architect either keeps crashing (version 10) or tells me that sequence flows may only be used within one pool (version 11).
Can anyone help me understand which part of BPMN I have misunderstood here?
Thanks in advance

I cannot answer why Enterprise Architect is crashing / not supporting your modelling approach, but I can assure you that referencing a global task or another process via call activities from different pools is valid BPMN 2.0.
The specification (pp. 183 ff./213 ff. in the PDF, Call Activities) does not mention any restriction on the pools from which global tasks can be referenced (it wouldn't make sense to put such a restriction on referencing something "global", either), and other modeling tools seem to support your approach as well. I just tested the case with Signavio and it works fine; the syntax checker does not throw any errors.
Another approach to your case might be referencing another process via Link Intermediate Events (pp. 183/213 in the PDF). I don't know whether this is possible in Enterprise Architect, but it might be worth a try.
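To illustrate (a hedged sketch with made-up process and pool IDs, not an excerpt from the specification): in the BPMN 2.0 XML serialization, a call activity points at the reusable process through its calledElement attribute, and nothing prevents the processes behind different pools from pointing at the same one:

<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
             targetNamespace="http://example.com/warehouse">
  <collaboration id="Warehouse">
    <participant id="PoolA" name="Goods In" processRef="GoodsIn"/>
    <participant id="PoolB" name="Shipping" processRef="Shipping"/>
  </collaboration>
  <process id="GoodsIn">
    <callActivity id="CallErrorA" name="Error handling" calledElement="ErrorHandling"/>
  </process>
  <process id="Shipping">
    <callActivity id="CallErrorB" name="Error handling" calledElement="ErrorHandling"/>
  </process>
  <!-- the reusable error-handling process, referenced from both pools -->
  <process id="ErrorHandling" name="Error handling"/>
</definitions>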

Best practices for a MicroStrategy workflow

We are a team of 5 people working with MicroStrategy. We share every role, but we have no workflow.
Everybody may build or change attributes and change the schema. This often leads to reports not working. Furthermore, there is no "good" documentation. We tried to establish documentation with SharePoint, but there we also had no workflow.
Originally, we had an old project where, for every report, all the attributes were built from scratch, so we did not reuse any existing schema objects.
Hence, we started a new project. We realized that, due to a lack of understanding and a lack of workflow, we made (and still make) a lot of mistakes. We feel that we are slowly understanding things better (e.g. parent-child relationships), but the workflow is still horrible.
We have a development project and a live project, but with the way we are working now, we have a lot of problems. In particular, the missing version control system is killing us. We perform changes and forget what we did. Hence, we have to restore from backups, destroying useful work from a given day.
So what are best practices to:
* deploy new attributes, facts and reports
* ensure that old reports work after constructing new attributes and facts
* improve documentation
* handle attributes defined on fact tables and parent-child relationships
Any help is appreciated.
MicroStrategy development in a team environment, deploying from development to live, can be very challenging. As you rightly point out, the lack of version control and unknown interdependencies between objects can cause untold problems. There's no one right answer to this question, but I would suggest the following:
Use all the tools provided by MicroStrategy. When you're deploying from one project to another, don't just drag and drop in Object Manager; create a package. When you deploy that package, make sure you choose to create an undo package, so you can roll back changes if you encounter any problems.
Try to catch these problems in advance: running Integrity Manager before and after a deployment, even if it's just to generate the SQL for the reports, will show whether you've broken anything. On that note:
Create a third environment/project; call it test, release control, or whatever you prefer. Here you can test packages created in Object Manager, to ensure that they have the desired effect and don't break anything. In effect, this is a dry run for your deployment to live. This environment should be regularly refreshed from live (via project duplication), to make sure it doesn't get into an unexpected state (as the result of a broken Object Manager package import, for example).
Over and above that, I can only offer organisational advice. It's not uncommon for one person to take responsibility for schema objects (i.e. facts, attributes, transformations) so that developers don't undo each other's changes. If you have a large project, these objects could be split into functional areas, and individuals assigned.
Documentation is always tricky, but I like to put as much as possible into the object descriptions. This has the advantage of being visible in the Web interface (via tooltips), and of being included in the automated project documentation, should you choose to generate that. There is also the change-log functionality for each object, but in my experience developers soon stop filling those logs in, as saving happens too frequently. Still, if you can get people to populate them, you'll have a head start on understanding the changes in your project.
To summarise:
Use Object Manager packages to deploy changes
Test changes with Integrity Manager, to catch any issues as early as possible
Use a release control project/environment, so you're not catching issues in your production environment
Assign responsibility for schema objects to a specific person or persons where possible.

Custom Workflows vs Plug-ins in MS CRM

I have used a lot of plug-in code to implement business logic in CRM, but now I've come across this feature called Custom Workflow Activity.
Now I wonder: when should I use these custom workflow activities instead of plug-ins?
Code activities are custom steps that can be inserted into one or many different workflows; a kind of "plug-in", but designed to be inserted into workflows.
Workflows give you more feedback because they are represented visually in CRM, so non-technical people can see the status of a workflow and the steps that have been executed since the start. Workflows also run in the Asynchronous Service, so they execute asynchronously, whereas plug-ins run synchronously, inside the application pool.
So workflows are also better for long-running processes.
With that being said, plug-ins are still helpful when:
You need an immediate response, because they are triggered and executed inside CRM's application pool, and
You need to run inside the transaction, which they can abort by raising an exception.
Example: you have an integration with a 3rd-party service, where a record can't be created in CRM unless something is validated on the other side. Another example is concurrency: the auto-number plug-in is a plug-in precisely because it needs to lock the database inside the transaction; otherwise multiple concurrent threads could create duplicate IDs.
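As an illustration of the transaction point, here is a minimal, hedged sketch of a synchronous plug-in for the validation scenario above; ValidateOnCreatePlugin and ValidateWithThirdParty are hypothetical names standing in for your code, while IPlugin and InvalidPluginExecutionException are the actual CRM SDK types:

using System;
using Microsoft.Xrm.Sdk;

// A synchronous, in-transaction plug-in registered on Create of a record:
// throwing InvalidPluginExecutionException rolls the creation back.
public class ValidateOnCreatePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)
            serviceProvider.GetService(typeof(IPluginExecutionContext));

        if (context.InputParameters.Contains("Target") &&
            context.InputParameters["Target"] is Entity target)
        {
            // Hypothetical call to the 3rd-party service mentioned above.
            if (!ValidateWithThirdParty(target))
                throw new InvalidPluginExecutionException(
                    "Record rejected by the external validation service.");
        }
    }

    private static bool ValidateWithThirdParty(Entity target)
    {
        return true; // stand-in: call the external service here
    }
}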
So the answer, as always, is: it depends. :)
I went deep into this subject myself and found some interesting things I want to share.
So here is a complete comparison:
Plug-ins only fire on data changes, such as records being created or updated, but custom workflow activities take part inside a process (workflow, dialog, ...).
As a result, workflows can be triggered not only on data changes but also on demand, at any time and at any point inside their process. As you may have gathered, this is exactly the flexibility needed for implementing complicated business logic.
Plug-ins won't accept arguments or passed-in data,
but custom workflow activities make this possible via InArgument properties like the one below:
[Input("Case")]                   // label of the field shown in the workflow designer
[ReferenceTarget("incident")]     // when using EntityReference, the target entity type must be specified
public InArgument<EntityReference> YourArg { get; set; }   // almost every data type is supported
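To make that concrete, here is a minimal, hedged sketch of a complete custom workflow activity; the class name, arguments and logic are hypothetical, while CodeActivity, InArgument/OutArgument and the Input/Output/ReferenceTarget attributes come from System.Activities and Microsoft.Xrm.Sdk.Workflow:

using System.Activities;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Workflow;

// A minimal sketch of a reusable custom workflow activity: register it once,
// then drop it into any workflow or dialog and wire up its arguments there.
public class CheckCaseActivity : CodeActivity
{
    [Input("Case")]                 // label of the field shown in the designer
    [ReferenceTarget("incident")]   // entity type behind the EntityReference
    public InArgument<EntityReference> Case { get; set; }

    [Output("Case Found")]          // outputs can feed subsequent workflow steps
    public OutArgument<bool> CaseFound { get; set; }

    protected override void Execute(CodeActivityContext context)
    {
        // Read the input argument supplied in the workflow designer.
        EntityReference caseRef = Case.Get(context);

        // Hypothetical business logic; replace with a real check.
        CaseFound.Set(context, caseRef != null);
    }
}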
Workflows can easily be used and modified by business users.
Custom workflow activities are highly reusable: with one registration you have a piece of business logic that can be used in several situations.
In some cases you can even write code that works across many different entities.
So far, custom workflow activities look like the more versatile option, but the point where a plug-in takes over is when you are validating data changes and may eventually need to revert them. Of course this is possible with custom workflow activities too, but it is much easier to add a plug-in than a workflow.
And bear in mind that plug-ins run faster (as I have tested myself)!
However, profiling workflows in CRM is still buggy!
Many developers and MS CRM beginners are unsure in some scenarios whether to go with workflows or with plug-ins, as both can be used to perform a specific task on the server side.
Plug-ins and workflows have some significant differences, such as limitations on event messages and triggering points.
You can refer to the link below for a complete overview of the differences:
https://mscrm16tech.com/.../workflows-vs-plugins-in-ms-crm/

Claim processing with policy variants using Drools and jBPM?

I'm trying to build a claim processing system. There will be multiple variations of insurance policies (based on negotiations with individual clients). The aim is to keep a base policy per provider and then apply per-client variations to it, to ensure easy maintenance of top-level policies (like whether damage due to fire is covered or not). It should be easy for non-technical business users to create the policies.
What is the best approach for this? I'm thinking along the lines of using Drools to define the basic rules and then creating jBPM processes per policy provider that consume the rules, with Guvnor for authoring and maintenance of rules and processes.
Assuming no human tasks (it's going to be just a set of rules that need to be fired and the results returned), is using jBPM going to be overkill? Are there better alternatives in the open-source world?
Drools is already closely integrated with jBPM for use cases like this, so it definitely won't be overkill; they work very nicely together. jBPM is not only about human interactions; it can just as well be used for automated processing.
One remark: it might even be possible not to have one process per provider, but only one process (or a small set of processes), and to use rules to handle the variations.

Software to consolidate information flows into a company

At our company, we are looking at replacing a number of legacy systems that handle information flowing from our customers into our company. A typical system lets the user drop a file on an FTP server somewhere. This file is then transformed by a number of programs and eventually ends up in some kind of database. In total we have 30+ different "systems" or applications that do this, and it is more or less a mess.
We believe we lack a common system to manage these flows: triggered by an upload or possibly another event, register the data, create some sort of "job" (or process) from it, pass it through the various services/transformation programs it needs to go through, provide feedback to the customer, provide progress information etc. to us, handle failures, and so on. Sort of like Jenkins (/Hudson/CruiseControl/similar), but for information transformation jobs rather than build jobs, and with a job being more of a "process instance" of a job than the job itself (e.g. different data should trigger the job several times, running concurrently).
We are capable of writing such software ourselves, but surely software like this already exists(?). I have been googling around and found that what we need may possibly be "job scheduling" software or "business process management" software. However, these are all new domains for us, and I am quite uncertain as to what kind of software would fit our needs. It appears one could invest a great deal of resources in this type of software before finding the right fit.
So, what I am looking for is pointers to the kinds of software or systems that could address the needs we have. Preferably open source, Java-based, running in a Java EE container or similar, but really, at this point, almost any pointer/hint will be welcome :-)
Thanks in advance
P.S. I realise I may be out of scope for Stack Exchange, but I have been unable to locate another forum where this kind of question might be answered, so I hope it is OK.
I know of the following products:
Redwood Cronacle (I worked with it from 1994 to 1997, and it still runs). Commercial product. Oracle- and C-based. Strong on multiple server platforms. Embeddable.
Oracle E-Business Suite core. Commercial product. Oracle-based. Strong for integration with that same ERP system. Weak on multiple server platforms.
Invantive Vision (I developed it :-). Commercial product. Oracle- and Java-based. Strong on integration with ETL (open-source Pentaho). Weak on multiple server platforms. Embeddable.
Quartz Scheduler. Apache license. Java-based. I worked with it in 2004 or so. Strong focus on embedding.
Hi, I don't know whether you will find that solution in open source or Java. It sounds like bespoke or custom software to me. I would advise you to look for a project-management software developer with a high level of IT and data-warehousing expertise, and to ask for a bespoke, customized installation with a real-time database. I think that will solve your problem.

Web Application deployment and database/runtime data management

I have decided to finally nail down my team's deployment processes, soup-to-nuts. The last remaining pain point for us is managing database and runtime data migration/management. Here are two examples, though many exist:
If releasing a new "Upload" feature, automatically create the upload directory and configure its permissions. In later releases, verify its existence/permissions - forever, automatically.
If a value in the database (let's say an Account Status of "Signup") is no longer valid, automatically migrate the data in the database to the proper values, given some set of business rules.
I am interested in implementing a framework that allows developers to manage and deploy these changes with the same ease that we manage and deploy our code.
So the first question is: 1. What tools/frameworks are out there that provide this capability?
In general, this seems to be an issue in any given language and platform. In my specific case, I am deploying a .NET MVC2 application which uses Fluent NHibernate for database abstraction. I already have in my deployment process a tool which triggers NHibernate's SchemaUpdate - which is awesome.
What I have built to address this issue in my own way is a tool that scans target assemblies for classes which inherit from a certain abstract class (Deployment). That abstract class exposes hooks which you can override to implement your own arbitrary deployment code, in the context of your application's codebase. The Deployment class also provides a versioning mechanism, and the tool manages the current "deployment version" of a given running app. A custom NAnt task then glues this together with the NAnt deployment script, triggering the hooks at the appropriate times.
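For illustration, here is a minimal, hedged sketch of what such an abstract Deployment class and its hooks might look like; every name below is hypothetical, reconstructed from the description above rather than taken from the actual tool:

using System;

// The abstract class the deployment tool scans target assemblies for.
// Each concrete step declares the deployment version it belongs to; the
// tool compares that against the app's current version and runs the hooks.
public abstract class Deployment
{
    public abstract Version Version { get; }

    // Hooks triggered by the custom NAnt task at the appropriate times.
    public virtual void BeforeDeploy() { }
    public abstract void Deploy();
    public virtual void AfterDeploy() { }
}

// Example step: the "Upload" feature's directory and permissions.
public class CreateUploadDirectory : Deployment
{
    public override Version Version => new Version(1, 4);

    public override void Deploy()
    {
        // CreateDirectory is a no-op if the directory already exists, so
        // re-running the step doubles as the "verify existence" check.
        System.IO.Directory.CreateDirectory(@"C:\App\Uploads");
    }
}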
This seems to work well and does meet my goals, but here's my beef, which leads to my second question: 2. Surely what I just described already exists; if so, can you point me to it? And 3. has anyone started down this path and gained insight into problems with this approach?
Lastly, if something like this exists, but not on the .NET platform, please still let me know - as I would be more interested in porting a known solution than starting from zero on my own solution.
Thanks everyone, I really appreciate your feedback!
For each major release, have a script that creates the environment with the exact requirements you need.
For minor releases, have a script that is split into the various releases and incrementally alters the environment (a sketch of a runner for such steps follows below). There are some big benefits to this:
You can follow the changes to the environment over time by reading the script and matching it against release notes and change logs.
You can create a brand-new environment by running the latest major script and then the latest minor scripts.
You can create a brand-new environment of a previous version (perhaps for testing purposes) by telling the scripts to stop at a certain minor release.
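As a minimal, hedged sketch of that incremental approach, reusing the hypothetical Deployment steps sketched in the question: the stopAt parameter is what lets you rebuild an environment of a previous version.

using System;
using System.Collections.Generic;
using System.Linq;

// Applies all steps newer than the environment's current version, in
// version order, optionally stopping at a target (minor) release.
public static class DeploymentRunner
{
    public static void Run(IEnumerable<Deployment> steps,
                           Version current, Version stopAt = null)
    {
        var pending = steps
            .Where(s => s.Version > current)
            .Where(s => stopAt == null || s.Version <= stopAt)
            .OrderBy(s => s.Version);

        foreach (var step in pending)
        {
            step.BeforeDeploy();
            step.Deploy();      // e.g. run one incremental migration script
            step.AfterDeploy();
        }
    }
}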