Drools - rule from string gets created in-memory

Hello fellow Droolers!
I am experimenting with the API referenced below to load rules from a string.
https://stackoverflow.com/questions/42927331/how-to-load-rules-from-a-string-in-drools-6-5
However, I observed that the rule gets created in-memory and is not written to a file on my hard disk.
Is there an API to get such behaviour?
Cheers!

The Drools API solves a different problem than creating resources on the OS file system. The Drools documentation describes the "KIE virtual file system KieFileSystem" used to build the project, and as the Baeldung tutorials put it, "KieFileSystem is an in-memory file system provided by the framework".
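Since KieFileSystem is purely in-memory, a simple workaround is to write the rule string to a real file yourself with plain java.nio.file, either before or after feeding it to Drools. A minimal sketch (the directory, file name, and DRL content below are made up for illustration):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class RuleDump {
    // Hypothetical DRL text; in the linked question it comes from a String variable.
    static final String DRL =
        "package rules;\n" +
        "rule \"hello\"\n" +
        "when\n" +
        "then\n" +
        "    System.out.println(\"fired\");\n" +
        "end\n";

    // Write the rule text to a real file so it survives outside KieFileSystem's memory.
    static Path dumpRule(Path dir, String name, String drl) throws IOException {
        Files.createDirectories(dir);
        Path target = dir.resolve(name);
        Files.writeString(target, drl); // Java 11+
        return target;
    }

    public static void main(String[] args) throws IOException {
        Path written = dumpRule(Path.of("rules-out"), "hello.drl", DRL);
        System.out.println(Files.readString(written).contains("rule \"hello\""));
    }
}
```

You can then pass the same string to KieFileSystem.write(...) for building, so the in-memory copy and the on-disk copy stay in sync.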

Related

Spring boot Kogito Mongodb integration

I'm working on creating a Kogito BPM Spring Boot project. I'm very happy to see the reduced level of complexity of integrating jBPM into Spring Boot with the help of Kogito. I'm struggling to find answers to my questions, so I'm posting them here:
Kogito is an open-source cloud offering for jBPM. Am I correct?
I see that only MongoDB or Infinispan can be used or supported with Kogito. I can't integrate PostgreSQL with Kogito. Am I correct?
I successfully created the Spring Boot Kogito MongoDB project, and when I placed a .bpmn file in the resources folder, endpoints were created automatically. I was able to access them, run the process, and get a response. But I don't see any entries created in MongoDB; I don't even see the collection being created. The .bpmn contains a simple hello-world flow with start + script task + end nodes. Please help me understand this. Is the RuntimeManager configured for a per-request strategy? How can I change it?
Answers inline.
Kogito is an open-source cloud offering for jBPM. Am I correct?
Kogito is open-source and has jBPM integrated into its codebase to run on a cloud-native environment. In addition, a lot of work has been done to make it also run with native compilation when used with Quarkus.
I see that only MongoDB or Infinispan can be used or supported with Kogito. I can't integrate PostgreSQL with Kogito. Am I correct?
To date, Kogito has the following add-ons to support persistence:
Infinispan
Postgres
MongoDB
JDBC (so you can extend to support any database you wish)
See more about it here https://docs.jboss.org/kogito/release/latest/html_single/#con-persistence_kogito-developing-process-services.
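In a Spring Boot project, the MongoDB connection itself is configured through standard Spring properties. A minimal sketch (the spring.data.mongodb.* keys are standard Spring Boot; the add-on dependency that switches Kogito's persistence on varies by release, so check the linked docs for your version):

```properties
# Standard Spring Boot MongoDB connection settings (illustrative values)
spring.data.mongodb.uri=mongodb://localhost:27017
spring.data.mongodb.database=kogito
```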
But I don't see any entries created in MongoDB
Do you mind sharing a reproducer? Have you taken a look at the example in https://github.com/kiegroup/kogito-examples/tree/stable/process-mongodb-persistence-springboot? That example shows a call to a sub-process that relies on a user task, so the process must be persisted in order to be resumed on a new request that completes the task. However, since your process starts and ends in one request, there's nothing to be persisted in the DB:
Runtime persistence is intended primarily for storing data that is required to resume workflow execution for a particular process instance. Persistence applies to both public and private processes that are not yet complete. Once a process completes, persistence is no longer applied. This persistence behavior means that only the information that is required to resume execution is persisted.

How can I make Service Fabric package sizes practical?

I'm working on a Service Fabric application that is deployed to Azure. It currently consists of only 5 stateless services. The zipped archive weighs in at ~200MB, which is already becoming problematic.
By inspecting the contents of the archive, I can see the primary problem is that many files are required by all services. An exact duplicate of those files is therefore present in each service's folder. However, the zip compression format does not do anything clever with respect to duplicate files within the archive.
As an experiment, I wrote a little script to find all duplicate files in the deployment and delete all but one copy of each file. Then I zipped the result, and it comes in at a much more practical 38MB.
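The duplicate-finding part of such a script can be sketched with standard Java: hash every file under the package root and group paths by digest. This is an illustrative reconstruction, not the OP's actual script; the file names in main are made up, and what to do with the duplicates is left to you:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class DupFinder {
    // SHA-256 of a file's bytes, hex-encoded.
    static String sha256(Path p) throws IOException {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(Files.readAllBytes(p));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }

    // Group every regular file under root by content hash; groups of size > 1 are duplicates.
    static Map<String, List<Path>> duplicates(Path root) throws IOException {
        try (Stream<Path> files = Files.walk(root)) {
            return files.filter(Files::isRegularFile)
                .collect(Collectors.groupingBy(p -> {
                    try { return sha256(p); }
                    catch (IOException e) { throw new RuntimeException(e); }
                }))
                .entrySet().stream()
                .filter(e -> e.getValue().size() > 1)
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
        }
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical package layout with one duplicated file.
        Path root = Files.createTempDirectory("pkg");
        Files.writeString(root.resolve("a.dll"), "same bytes");
        Files.writeString(root.resolve("b.dll"), "same bytes");
        Files.writeString(root.resolve("c.dll"), "different bytes");
        System.out.println(duplicates(root).size());
    }
}
```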
I also noticed that system libraries are bundled, including:
System.Private.CoreLib.dll (12MB)
System.Private.Xml.dll (8MB)
coreclr.dll (5MB)
These are all big files, so I'd be interested to know if there is a way for me to bundle them only once. I've tried removing them altogether, but then Service Fabric fails to start the application.
Can anyone offer any advice as to how I can drastically reduce my deployment package size?
NOTE: I've already read the docs on compressing packages, but I am very confused as to why their compression method would help. Indeed, I tried it and it didn't. All it does is zip each subfolder inside the primary zip; there is no de-duplication of files involved.
There is a way to reduce the size of the package, but I would say it isn't a good way, or the way things should be done. Still, I think it can be of use in some cases.
Please note: This approach requires target machines to have all prerequisites installed (including .NET Core Runtime etc.)
When building a .NET Core app there are two deployment models: self-contained and framework-dependent.
In self-contained mode all required framework binaries are published with the application binaries, while in framework-dependent mode only the application binaries are published.
By default, if the project has a runtime specified, e.g. <RuntimeIdentifier>win7-x64</RuntimeIdentifier> in the .csproj, then the publish operation is self-contained; that is why all of your services copy all the framework binaries.
In order to turn this off you can simply add the SelfContained=false property to every service project you have.
Here is an example from a new .NET Core stateless service project:
<PropertyGroup>
<TargetFramework>netcoreapp2.2</TargetFramework>
<AspNetCoreHostingModel>InProcess</AspNetCoreHostingModel>
<IsServiceFabricServiceProject>True</IsServiceFabricServiceProject>
<ServerGarbageCollection>True</ServerGarbageCollection>
<RuntimeIdentifier>win7-x64</RuntimeIdentifier>
<TargetLatestRuntimePatch>False</TargetLatestRuntimePatch>
<SelfContained>false</SelfContained>
</PropertyGroup>
I did a small test and created a new Service Fabric application with five services. The uncompressed package size in Debug was around ~500 MB. After I modified all the projects, the package size dropped to ~30 MB.
The deployed application worked well on the local cluster, which demonstrates that this approach is a working way to reduce package size.
In the end I will highlight the warning one more time:
Please note: This approach requires target machines to have all prerequisites installed (including .NET Core Runtime etc.)
You usually don't want to know which node runs which service, and you want to deploy service versions independently of each other, so sharing binaries between otherwise independent services creates a very unnatural run-time dependency. I'd advise against that, except for platform binaries like ASP.NET and .NET, of course.
However, did you read about creating differential packages? https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-application-upgrade-advanced#upgrade-with-a-diff-package That would reduce the size of upgrade packages after the initial 200 MB hit.
Here's another option:
https://devblogs.microsoft.com/dotnet/app-trimming-in-net-5/
<SelfContained>True</SelfContained>
<PublishTrimmed>True</PublishTrimmed>
From a quick test just now, trimming one app reduced the package size from ~110 MB to ~70 MB (compared to ~25 MB for SelfContained=false).
The trimming process took several minutes for a single application, though, and the project I work on has 10-20 apps per Service Fabric project. I also suspect that this process isn't safe when you have a heavy reliance on dependency injection in your code, since trimming uses static analysis and can remove types that are only reached via reflection.
For debug builds we use SelfContained=false, because developers will have the required runtimes on their machines. Not for release deployments, though.
As a final note, since the OP mentioned file upload being a particular bottleneck:
A large proportion of the deployment time is just zipping and uploading the package
I noticed recently that we were using the deprecated Publish Build Artifacts task when uploading artifacts during our build pipeline. It was taking 20 minutes to upload 2 GB of files. I switched over to the suggested Publish Pipeline Artifact task, and it took our publish step down to 10-20 seconds. From what I can tell, this newer task uses all kinds of tricks under the hood to speed up uploads (and downloads), including file deduplication. I suspect that zipping up build artifacts yourself at that point would actually hurt your upload times.
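For reference, the swap in an Azure Pipelines YAML definition looks roughly like this (task and input names as I recall them; double-check against the current task reference for your organization):

```yaml
# Deprecated task being replaced:
# - task: PublishBuildArtifacts@1
#   inputs:
#     pathToPublish: '$(Build.ArtifactStagingDirectory)'
#     artifactName: 'drop'

# Newer task with deduplication-based uploads:
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: '$(Build.ArtifactStagingDirectory)'
    artifact: 'drop'
```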

jBPM Repositories disappear after Wildfly restart

Pardon if I can't give more pointers, but I'm really a noob at WildFly. I'm using version 9.0.2.
I have deployed jbpm-console, drools, and dashboard; no problems here. I restart WildFly using the JBoss CLI, and when I log in again, the repositories won't appear in the web interface or on disk (at least nothing that grepping or find will show).
I'm using the H2 database. I'm not even sure where to look, does anyone have any idea?
Thanks in advance!
After enough reading through the docs, it seems that it's necessary to configure jBPM to persist. From the docs:
"By default, the engine does not save runtime data persistently. This means you can use the engine completely without persistence (so not even requiring an in memory database) if necessary, for example for performance reasons, or when you would like to manage persistence yourself. It is, however, possible to configure the engine to do use persistence by configuring it to do so. This usually requires adding the necessary dependencies, configuring a datasource and creating the engine with persistence configured."
https://docs.jboss.org/jbpm/v5.3/userguide/ch.core-persistence.html
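For jBPM 5-era engines, configuring persistence typically means adding a JPA persistence unit alongside a data source. A sketch of the shape it takes is below; the provider and entity class names vary between jBPM releases, so treat them as illustrative rather than exact:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence">
  <persistence-unit name="org.jbpm.persistence.jpa" transaction-type="JTA">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <!-- Example data source name; it must match one registered in the app server -->
    <jta-data-source>jdbc/jbpm-ds</jta-data-source>
    <!-- Entity classes differ between jBPM releases; these are jBPM 5-era names -->
    <class>org.drools.persistence.info.SessionInfo</class>
    <class>org.drools.persistence.info.WorkItemInfo</class>
    <class>org.jbpm.persistence.processinstance.ProcessInstanceInfo</class>
  </persistence-unit>
</persistence>
```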

How to solve a SOAP services issue with a portable class library?

I have an issue using SOAP services from a portable class library.
Once I add the service reference, the configuration file is created empty, and calling any operation from the Windows Phone or Windows Store app project returns a null value.
However, if I add the reference to the WP or W8 project directly, the configuration file is not empty and operations return data.
Any reason for that?
It is the same as in full .NET. If you call a web service, the configuration file is searched for in the calling assembly, which in your case is the WP8 project.
You have two options: copy the relevant configuration from app.config in the PCL to the WP8 project, or create the web service configuration completely in code in your PCL so that no config file is needed anywhere.

JBPM Workflow patch generation

I have been using jBPM workflow in my project, and I have a small question regarding generating the database patches or SQL statements needed to apply jBPM workflow modifications.
Currently, jBPM provides a way to refresh the jBPM tables in the schema with the deployment of the latest process definitions. However, what if my system is already live with a process definition deployed in state X, and I have now modified the process definition file to accommodate change X2? I still need to be able to deploy the delta changes without disrupting the instances of old saved data.
Is it possible to generate only "delta" database scripts for the JBPM process definition modification? And what are other good tools which can be used to modify process definitions more intuitively?
To reiterate my problem: jBPM deploy cleans the jBPM tables of old instances maintained there and then redeploys the latest files. How do I generate the delta without deleting old data? Are there any user-friendly tools for that?
Any help in this regard will be appreciated.
I'm not sure I have understood your issue correctly. jBPM doesn't clean the tables of old process instances when you deploy a new process definition.
When you deploy a new process definition with the same name as an existing one, you get a new version of that process definition.
Existing process instances continue running with the process definition version they were started with, while new process instances take the latest version unless you specify the precise version to be used.
In theory, a process definition can also be modified for running process instances using the API. In doing so, you must take care to make such changes compatible with the current state of those instances.