We are planning to use BRMS 5.3.1 in our projects, and a use case popped up yesterday where the business wanted to store which rules evaluated to TRUE and were eventually fired. This is so that the information can be used for analysis purposes at a later point. Does Drools provide an API that could provide this information at runtime? If it does, what would be the performance impact of having such a feature enabled on production systems?
Appreciate your answers on this.
Yes, you can add one of the AgendaListeners to the session to find out which rules were activated and fired. The performance impact will depend on what you do inside that listener, but if you store the information the listener provides in an asynchronous way (sending a JMS message, for example), everything will be fine.
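For illustration, here is a minimal sketch of such a listener, assuming the Drools 5.x event API (org.drools.event.rule); the auditAsync method is just a placeholder for whatever asynchronous hand-off you choose:

import org.drools.event.rule.AfterActivationFiredEvent;
import org.drools.event.rule.DefaultAgendaEventListener;

public class FiredRuleAuditListener extends DefaultAgendaEventListener {

    @Override
    public void afterActivationFired(AfterActivationFiredEvent event) {
        // Name of the rule that just fired.
        String ruleName = event.getActivation().getRule().getName();
        // Hand the name off asynchronously (e.g. via JMS) instead of blocking the session.
        auditAsync(ruleName);
    }

    private void auditAsync(String ruleName) {
        // Placeholder: enqueue the rule name for later analysis.
        System.out.println("Fired: " + ruleName);
    }
}

You would then register it on the session before firing the rules, e.g. ksession.addEventListener(new FiredRuleAuditListener());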
HTH
I've used a lot of plug-in code to implement business logic in CRM, but now I've come across this feature called Custom Workflow Activity.
Now I wonder: when should I use these custom workflow activities instead of plug-ins?
Code activities are custom steps which can be inserted into one or many different workflows: a kind of "plug-in", but meant to be inserted into workflows.
Workflows give you more feedback because they are represented visually in CRM, so non-technical people can see the status of a workflow and the steps which were executed since the start. Workflows are also executed by the Asynchronous Service, so they run asynchronously; plug-ins run synchronously, inside the application pool.
So workflows are also better for long running processes.
With that being said, plugins are still helpful when:
You need to have an immediate response, because they are triggered and executed inside CRM's application pool and,
You need to run anything inside the transaction, so they can abort it by raising an exception.
Example: you have an integration with a 3rd party service, where a record can't be created in CRM unless something is validated on the other side. Another example is concurrency: the auto-number plugin is a plugin because it needs to lock the database in the transaction, otherwise multiple concurrent threads could create duplicate IDs.
So, the answer, as always, is: it depends. :)
I went deep into the subject myself and found some interesting things I want to share.
So here is the full comparison:
Plug-ins only fire on data changes, such as updating or creating records, but custom workflow activities take part inside a process (workflow, dialog, ...).
As a result, workflows can be triggered not only on data changes but also on demand, at any time and at any point inside their process. As you may have gathered, that is exactly the flexibility needed for implementing complicated business logic.
Plug-ins won't accept arguments or passed-in data,
but custom workflow activities make this possible by using InArgument properties like the ones below:
[Input("Case")] //label of the field shown in workflow
[ReferenceTarget("incident")] //if using EntityReference, must point the type
public InArgument<EntityReference> yourArg { get; set; } //almost every data type is supported
Workflows can easily be used and modified by business users.
Custom workflow activities are absolutely reusable: with one registration you have a piece of business logic that can be used in several situations.
In some cases you may even end up writing code that can be used on many different entities.
So far it sounds like custom workflow activities are more reliable than plug-ins, but the point where a plug-in wins out over a custom workflow activity is when you are validating data changes and may eventually need to revert those changes. Of course this is possible in custom workflow activities too, but it is much easier to do with a plug-in than with a workflow.
And bear in mind that plug-ins run faster (as I tested myself)!
However, profiling workflows in CRM is still buggy!
Many developers and MS CRM beginners get confused in some scenarios about whether to go with workflows or with plug-ins, as both can be used and are able to perform specific tasks on the server side.
Plug-ins and workflows have some significant differences, such as limitations in event messages and triggering points.
You can refer to the link below for a complete understanding of the differences:
https://mscrm16tech.com/.../workflows-vs-plugins-in-ms-crm/
The Reliable Services Overview topic has a section, at the bottom, called When to use Reliable Services APIs. In there, one list item says:
Your application needs to maintain change history for its units of state*.
The star at the end is explained just a little bit further down:
* Features available at SDK general availability.
The SDK has reached general availability by now, but I cannot find any information about how to make use of the "maintain change history for its units of state" feature, or even a suggestion of what that actually means.
I'm asking here, in hope that someone can shed any light on this. I'm interested in knowing whether this feature is indeed available, or if not when it is supposed to be available, or if it has been abandoned.
Some insights on the intended design and functionality of this feature would also be much appreciated.
That section describes the scenarios in which you might use the reliable collections. They can be used to keep multiple versions of an object and build a history of changes (like deltas or snapshots).
This way you can create an event source (it would require some coding on your end, though).
Unit of state: an entry in the State Manager.
I'm admittedly unsure whether this post falls within the scope of acceptable SO questions. If not, please advise whether I might be able to adjust it to fit or if perhaps there might be a more appropriate site for it.
I'm a WinForms guy, but I've got a new project where I'm going to be making web service calls for a Point of Sale system. I've read about how CRUD operations are handled in RESTful environments where GET/PUT/POST/etc represent their respective CRUD counterpart. However I've just started working on a project where I need to submit my requirements to a developer who'll be developing a web api for me to use but he tells me that this isn't how the big boys do it.
Instead of making web requests to create a transaction followed by requests to add items to the transaction in the object based approach I'm accustomed to, I will instead use a service based approach to just make a 'prepare' checkout call in order to see the subtotal, tax, total, etc. for the transaction with the items I currently have on it. Then when I'm ready to actually process the transaction I'll make a call to 'complete' checkout.
I quoted a couple words above because I'm curious whether these are common terms that everyone uses or just ones that he happened to choose to explain the process to me. And my question is, where might I go to get up to speed on the way the 'big boys' like Google and Amazon design their APIs? I'm not the one implementing the API, but there seems to be a little bit of an impedance mismatch in regard to how I'm trying to communicate what I need and the way the developer is expecting to hear my requirements.
Not sure with regard to the specifics of your application, though your general understanding seems OK. There are always corner cases that test the norm, though.
I would advise that you listen to your dev team on how things should be implemented and just provide the "whats" (requirements). They should be trusted to know best practice and your company's own interpretation and standards (right or wrong). If they don't meet your requirements (ease of use, or the API can't easily be reused as requirements expand), then you can review why with an architect or dev manager.
However, if you are interested and want to debate and perhaps understand, check out Atlassian's best practice here: https://developer.atlassian.com/plugins/servlet/mobile#content/view/4915226.
FYI: Atlassian makes leading dev tools used in very large companies. Note also that these best practices came out of a refactoring effort, meaning they've been through the mill and know what has worked and what hasn't.
FYI2 (edit): Reading between the lines of your question, I think your dev is basically instructing you on how transactions are managed within REST. That is, you don't typically begin, add, end. Instead, everything that is transactional is rolled into a transaction wrapper and POSTed to the server as a single transaction.
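To make that concrete, here is a rough sketch of what the "prepare"/"complete" calls might look like from the client side. The endpoint URLs, field names, and payload shape below are illustrative assumptions only, not anything your developer has committed to:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CheckoutClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical payload: the whole cart travels in a single request.
        String cart = "{\"items\":[{\"sku\":\"A100\",\"qty\":2},{\"sku\":\"B200\",\"qty\":1}]}";
        HttpClient client = HttpClient.newHttpClient();

        // "Prepare" checkout: the server returns subtotal, tax, and total without committing anything.
        HttpRequest prepare = HttpRequest.newBuilder()
                .uri(URI.create("https://pos.example.com/api/checkouts/prepare")) // hypothetical URL
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(cart))
                .build();
        System.out.println(client.send(prepare, HttpResponse.BodyHandlers.ofString()).body());

        // "Complete" checkout: the same cart is posted again as one self-contained transaction.
        HttpRequest complete = HttpRequest.newBuilder()
                .uri(URI.create("https://pos.example.com/api/checkouts/complete")) // hypothetical URL
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(cart))
                .build();
        System.out.println(client.send(complete, HttpResponse.BodyHandlers.ofString()).body());
    }
}

The point is that each POST carries everything the server needs, so there is no server-side "open transaction" that has to be built up across several calls.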
I'm trying to build a claim processing system. There will be multiple variations of insurance policies (based on the negotiations with individual clients). The aim is to keep base policies per provider and then apply variations to them per client, to ensure easy maintenance of top-level policies (like whether damage due to fire is covered or not). The policies should be easy for non-technical business users to create.
What is the best approach for this? I'm thinking along the lines of using Drools to come up with the basic rules and then creating jBPM processes per policy provider that will consume the rules, with Guvnor for authoring and maintenance of rules and processes.
Assuming no human tasks (it's going to be just a set of rules that need to be fired and results thrown out), is using jBPM going to be overkill? Are there better alternatives in the open source world?
Drools is already closely integrated with jBPM for use cases like this, so it definitely won't be overkill; they work very nicely together. jBPM is not only about human interactions, it can just as well be used for automatic processing.
One remark: it might even be possible not to have one process per provider, but only one (or a small set of) process(es), and use rules to handle the variations.
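To sketch what that could look like with the Drools/jBPM 5 APIs (the file names and process id below are made up for illustration), you can load the rules and a single generic process into one knowledge base and drive both from the same session:

import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatefulKnowledgeSession;

public class ClaimProcessing {
    public static void main(String[] args) {
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        // Base policy rules plus per-client variations, both as DRL (file names are made up).
        kbuilder.add(ResourceFactory.newClassPathResource("basePolicies.drl"), ResourceType.DRL);
        kbuilder.add(ResourceFactory.newClassPathResource("clientVariations.drl"), ResourceType.DRL);
        // One generic claim process instead of one process per provider.
        kbuilder.add(ResourceFactory.newClassPathResource("claimProcess.bpmn"), ResourceType.BPMN2);

        KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
        kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());

        StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
        // ksession.insert(claim);                          // insert the claim facts first
        ksession.startProcess("com.example.claimProcess");  // made-up process id
        ksession.fireAllRules();                            // rule tasks in the process fire the rules
        ksession.dispose();
    }
}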
I'm looking to convert a relatively new web-based application with a clear domain model over to more of a CQRS style system. My new application is essentially an enhanced replacement of an older existing system.
The existing systems in my organization share a set of common databases, which are updated by an untold number of applications (developed via the Chaos Method) that exist in silos throughout the company. (As it stands, I believe that no single person in the company can identify them all.)
My question is therefore about the read model(s) for my application. Since various status changes, general user data, etc. are updated by other applications outside my control, what's the best way to handle building the read models in such a way that I can deal with outside updates, but still keep things relatively simple?
I've considered the following so far:
Create Views in the database for read models, that read all tables, legacy and new
Add triggers to existing tables to update new read model tables
Add some code to the database (CLR Stored proc/etc [sql server]) to update an outside datastore for read models
Abandon hope
What is the general consensus on how to approach this? Is it folly to think I can bring order to a legacy system without fully rewriting everything from scratch?
I've used option #1 with success. Creating views to denormalize the data into a read model is a viable option, depending on the complexity of the write database(s). Meaning, if it is relatively straightforward joins that most developers can understand, then I would take a closer look to see if it's viable for you. I would be careful about having too much complexity in these views.
Another thing to consider is periodic polling to build and update the read model, similar to traditional reporting databases. Although not optimal compared to notifications, depending on how stale your read model can be, this might also be an option to look at.
I was once in a similar situation; the following steps are how I did it:
To improve the legacy system and achieve a cleaner code base, the key is to take over the write responsibility. But don't be too ambitious, as this may introduce interface/contract changes which make the final deployment risky.
If all the writes are fired through anything other than direct SQL updates, keep them as backward compatible as you can. Treat them as adapters/clients of your newly developed command handlers.
If some of the writes are direct SQL updates that are out of your control:
Ask the team in charge if they can change to your new interface/contract?
If no, see step 3.
Ask if they can tolerate eventual consistency and are willing to replace the SQL updates with database procedures?
If yes, put all the SQL updates in the procedures, schedule a deployment, and see step 4.
If no, maybe you should include them in your refactoring.
Modify the procedures: replace the SQL updates with inserts of events, and develop a backend job to read through the events and publish them (a sketch of such a job follows this list). Have your new application subscribe to these events and fire commands to your command handlers.
Emit events from your command handlers and use them to update the tables that the other applications consume.
Move to the next part of the legacy system.
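For the "backend job" mentioned in the steps above, a bare-bones poller might look like this. The table, column, and connection details are made up, and publish stands in for whatever messaging infrastructure you use:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class EventRelayJob {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string; point it at the database holding the event table.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlserver://legacy-db;databaseName=Legacy")) {
            try (PreparedStatement select = conn.prepareStatement(
                    "SELECT id, payload FROM outbox_events WHERE published = 0 ORDER BY id");
                 ResultSet rs = select.executeQuery()) {
                while (rs.next()) {
                    long id = rs.getLong("id");
                    String payload = rs.getString("payload");
                    publish(payload); // hand the event to your broker / subscribers
                    try (PreparedStatement mark = conn.prepareStatement(
                            "UPDATE outbox_events SET published = 1 WHERE id = ?")) {
                        mark.setLong(1, id);
                        mark.executeUpdate();
                    }
                }
            }
        }
    }

    private static void publish(String payload) {
        // Placeholder: publish to a queue/topic your new application subscribes to.
        System.out.println("Published: " + payload);
    }
}

Run it on a schedule (or in a loop); the new application subscribes on the other side and turns the events into commands.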
If we had an infinitely powerful server, we wouldn't bother with view models and would instead just read from the basic entity tables. View models are meant to improve performance by preparing and maintaining an appropriate dataset for display. If you use a database view as a view model, you really haven't gained much performance over an ad hoc query (if you ignore the pre-planning that the SQL parser can do for a view).
I've had success with a solution that's less intrusive than @Hippoom's solution, but more responsive than @Derek's. If you have access to the legacy system and can make minor code changes, you can add an asynchronous queue write that generates an event in a queueing system (RabbitMQ, Kafka, or whatever) in the legacy system's repositories or wherever data is persisted. Making these writes async should not introduce any significant performance cost, and should the queue write fail it will not affect the legacy system. This change is also fairly easy to get through QA.
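As a rough sketch of that fire-and-forget write using the RabbitMQ Java client (the exchange and routing key names are made up, and a real implementation would reuse the connection rather than open one per publish):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ChangeEventPublisher {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    private final ConnectionFactory factory = new ConnectionFactory();

    public ChangeEventPublisher(String host) {
        factory.setHost(host);
    }

    // Called from the legacy repository right after it persists a change.
    public void publishAsync(String eventJson) {
        executor.submit(() -> {
            try (Connection connection = factory.newConnection();
                 Channel channel = connection.createChannel()) {
                // Made-up exchange and routing key, purely for illustration.
                channel.basicPublish("legacy.events", "record.updated", null,
                        eventJson.getBytes(StandardCharsets.UTF_8));
            } catch (Exception e) {
                // Swallow and log: a failed event write must never break the legacy save path.
                System.err.println("Event publish failed: " + e.getMessage());
            }
        });
    }
}

The single-threaded executor keeps the publish off the request path, and because failures are swallowed, a broker outage cannot break the legacy save.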
Then write an event-driven piece that updates your read models. During the legacy system update phase (which can take a while), or if you only have access to some of the legacy systems that write to these databases, you can have a small utility that puts a new "UpdateViewModel" event in the queue every couple of minutes. Then you get timely events when the legacy systems save something significant, but you are also covered for the systems that you are not able to update.