Spring Boot Kogito MongoDB integration

I'm working on creating a Kogito BPM Spring Boot project. I'm very happy to see the reduced complexity of integrating jBPM with Spring Boot thanks to Kogito. I'm struggling to find answers to my questions, so I'm posting them here:
Kogito is an open-source cloud offering for jBPM. Am I correct?
I see that only MongoDB or Infinispan can be used or supported with Kogito, and that I can't integrate PostgreSQL with Kogito. Am I correct?
I successfully created the Spring Boot Kogito MongoDB project, and when I placed a .bpmn file in the resources folder, endpoints were created automatically. I was able to access them, run the process, and get a response. But I don't see any entries created in MongoDB; I don't even see the collection being created. The .bpmn contains a simple hello-world flow with start + script task + end nodes. Please help me understand this. Is the RuntimeManager configured with a per-request strategy? How can I change it?

Answers inline.
Kogito is an open-source cloud offering for jBPM. Am I correct?
Kogito is open source and has jBPM integrated into its codebase to run in a cloud-native environment. In addition, a lot of work has been done so that it can also run with native compilation when used with Quarkus.
I see that only MongoDB or Infinispan can be used or supported with Kogito, and that I can't integrate PostgreSQL with Kogito. Am I correct?
To date, Kogito has the following add-ons to support persistence:
Infinispan
Postgres
MongoDB
JDBC (so you can extend it to support any database you wish)
See more about it here: https://docs.jboss.org/kogito/release/latest/html_single/#con-persistence_kogito-developing-process-services.
But I don't see any entries created in MongoDB
Do you mind sharing a reproducer? Have you taken a look at the examples in https://github.com/kiegroup/kogito-examples/tree/stable/process-mongodb-persistence-springboot? That example shows a call to a sub-process that relies on a user task. Hence the process must be persisted so it can fire up again on a new request that resolves the task. However, since your process starts and ends in a single request, there's nothing to be persisted in the DB:
Runtime persistence is intended primarily for storing data that is required to resume workflow execution for a particular process instance. Persistence applies to both public and private processes that are not yet complete. Once a process completes, persistence is no longer applied. This persistence behavior means that only the information that is required to resume execution is persisted.
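To see persistence kick in, the flow needs a wait state such as a user task. Below is a minimal sketch using Kogito's generic process API (the process id "approvals" and the "requester" variable are hypothetical; the Process/Model types are the generic ones from org.kie.kogito):

    import java.util.HashMap;
    import java.util.Map;

    import org.kie.kogito.Model;
    import org.kie.kogito.process.Process;
    import org.kie.kogito.process.ProcessInstance;
    import org.springframework.beans.factory.annotation.Qualifier;
    import org.springframework.stereotype.Component;

    @Component
    public class ApprovalsClient {

        private final Process<Model> approvals;

        // "approvals" is a hypothetical BPMN process id; Kogito registers one
        // Process bean per .bpmn file found at build time
        @SuppressWarnings("unchecked")
        public ApprovalsClient(@Qualifier("approvals") Process<? extends Model> approvals) {
            this.approvals = (Process<Model>) approvals;
        }

        public String startApproval() {
            Model model = approvals.createModel();
            Map<String, Object> params = new HashMap<>();
            params.put("requester", "john"); // hypothetical process variable
            model.fromMap(params);

            ProcessInstance<Model> instance = approvals.createInstance(model);
            instance.start();

            // If start() reaches a user task, the instance is now waiting and its
            // state is written to MongoDB. A start -> script -> end flow completes
            // inside this call, so there is never anything to store.
            return instance.id();
        }
    }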

Related

Spring Cloud integration test: embedded Kafka vs. Testcontainers

I have a Spring Cloud Stream application for which I need to write an integration test (to be specific, using Cucumber). The application communicates with other services using the Kafka message broker. From what I know, I could make this work using either a Kafka Testcontainer or Spring's embedded Kafka. But what I don't know is which one would be the better solution: is there anything that the Testcontainer can do that embedded Kafka can't, or the other way around? (use cases or examples would be appreciated!)
P.S. This integration test should be able to run in a CI/CD pipeline.
It is called embedded for a reason: it can really only be accessed from the process that spawned it. With Testcontainers you can reuse an existing container and access it from another process, but that's probably too exotic.
I'd guess that with properly configured Testcontainers you can get as close as possible to the production setup you'd deploy your solution to. Embedded Kafka might be limited in some areas, e.g. SSL configuration.
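For illustration, a minimal sketch of a Testcontainers-based test for a Spring Boot app (assuming JUnit 5 and the org.testcontainers:kafka module on the test classpath; the Kafka image tag is a placeholder):

    import org.junit.jupiter.api.Test;
    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.test.context.DynamicPropertyRegistry;
    import org.springframework.test.context.DynamicPropertySource;
    import org.testcontainers.containers.KafkaContainer;
    import org.testcontainers.junit.jupiter.Container;
    import org.testcontainers.junit.jupiter.Testcontainers;
    import org.testcontainers.utility.DockerImageName;

    @SpringBootTest
    @Testcontainers
    class KafkaStreamIntegrationTest {

        // One real broker per test class, started in Docker by Testcontainers
        @Container
        static KafkaContainer kafka =
                new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.4.0"));

        // Point the application's Kafka clients at the containerized broker
        @DynamicPropertySource
        static void kafkaProperties(DynamicPropertyRegistry registry) {
            registry.add("spring.kafka.bootstrap-servers", kafka::getBootstrapServers);
        }

        @Test
        void contextLoadsAgainstRealBroker() {
            // Exercise your stream bindings here; the broker behaves like production
        }
    }

Since the broker runs in Docker, the same test works on any CI/CD runner that has a Docker daemon available.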

How to write task information in the Spring Data Flow UI manually

I integrated Spring Batch into a REST controller of a Spring Boot application, which means we now operate the Spring Batch program by sending a REST call. In this case, we cannot make a jar and register the jar on the Spring Cloud Data Flow server. So my question is: how do we register a task if we don't have a jar?
You've asked a few similar questions today.
My recommendation is to refer to the reference guides of Spring Cloud Task and Spring Cloud Data Flow. Specifically, pay attention to the Spring Batch section.
Once you have the understanding as to what to do, you can build a batch-job as a Spring Cloud Task application, and run it standalone successfully.
If it runs locally as expected, you can switch to SCDF and register the JAR using the REST API, Shell, or the GUI. You'd need a physical uber-jar of the application for it. With that registered, you can then build a task definition with it and launch it from SCDF.
If you want to do all of the above programmatically, please have a look at the acceptance-test suite for examples.
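For illustration, a sketch of the programmatic route using the documented SCDF REST endpoints directly (the server URL, app name, and Maven coordinates are placeholders; POST /apps/task/{name} registers an app, POST /tasks/definitions creates a definition, POST /tasks/executions launches it):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ScdfTaskRegistration {

        private static final String SCDF = "http://localhost:9393"; // placeholder SCDF server

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // 1. Register the task app from its Maven coordinates (placeholder GAV)
            post(client, SCDF + "/apps/task/my-batch-task",
                    "uri=maven://com.example:my-batch-task:1.0.0");

            // 2. Create a task definition that uses the registered app
            post(client, SCDF + "/tasks/definitions",
                    "name=my-batch-task-def&definition=my-batch-task");

            // 3. Launch the task
            post(client, SCDF + "/tasks/executions", "name=my-batch-task-def");
        }

        private static void post(HttpClient client, String url, String form) throws Exception {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(form))
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(url + " -> " + response.statusCode());
        }
    }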

jBPM Repositories disappear after Wildfly restart

Pardon if I can't give more pointers, but I'm really a noob at WildFly. I'm using version 9.0.2.
I have deployed jbpm-console, drools, and dashboard - no problems here. I restart WildFly using the JBoss CLI, and when I log in again, the repositories don't appear in the web interface or on disk (at least nothing that grepping or find will show).
I'm using the H2 database. I'm not even sure where to look; does anyone have any idea?
Thanks in advance!
After enough reading through the docs, it would seem that it's necessary to configure jBPM to persist. From the docs:
"By default, the engine does not save runtime data persistently. This means you can use the engine completely without persistence (so not even requiring an in memory database) if necessary, for example for performance reasons, or when you would like to manage persistence yourself. It is, however, possible to configure the engine to do use persistence by configuring it to do so. This usually requires adding the necessary dependencies, configuring a datasource and creating the engine with persistence configured."
https://docs.jboss.org/jbpm/v5.3/userguide/ch.core-persistence.html
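For reference, the jBPM 5.x pattern for creating a persistent session looks roughly like this (a sketch based on the user guide linked above; it assumes jbpm-persistence-jpa on the classpath, Bitronix as the JTA transaction manager, and a data source configured for the org.jbpm.persistence.jpa persistence unit):

    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    import org.drools.KnowledgeBase;
    import org.drools.KnowledgeBaseFactory;
    import org.drools.persistence.jpa.JPAKnowledgeService;
    import org.drools.runtime.Environment;
    import org.drools.runtime.EnvironmentName;
    import org.drools.runtime.StatefulKnowledgeSession;

    import bitronix.tm.TransactionManagerServices;

    public class PersistentSessionFactory {

        public StatefulKnowledgeSession createSession(KnowledgeBase kbase) {
            // Persistence unit shipped with jbpm-persistence-jpa
            EntityManagerFactory emf =
                    Persistence.createEntityManagerFactory("org.jbpm.persistence.jpa");

            Environment env = KnowledgeBaseFactory.newEnvironment();
            env.set(EnvironmentName.ENTITY_MANAGER_FACTORY, emf);
            env.set(EnvironmentName.TRANSACTION_MANAGER,
                    TransactionManagerServices.getTransactionManager());

            // Session state (process instances, work items) is stored via JPA,
            // so it survives a server restart and can be restored afterwards
            return JPAKnowledgeService.newStatefulKnowledgeSession(kbase, null, env);
        }
    }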

Bluemix Liberty SQLDB

I have created an "enterprise template" Liberty server with an EAR file application requiring a few SQLDB connections. This is working, and I am able to cf push this server to the Bluemix environment.
My question is: how do I go about packaging the entire content and publishing it to Bluemix in ONE action (i.e., so others will have an instance of the same application running on Liberty with the same SQLDB table setup)?
From my quick browsing of the blogs and Q&A, I have only found articles about creating the SQLDB ahead of time, packaging the Liberty runtime as a .zip file, and then using cf push to Bluemix. Because the SQLDB was created ahead of time, the DB connections work.
So is there a way to package the Liberty server with the SQLDB creation as one entity, perhaps into one "buildpack"? If so, can someone guide me on the steps involved? (or articles/blogs, anything would help)
You can't do it.
If you want a script that does all the operations in one go, one idea is to create a simple job (in Java, for example) that you can launch from your script.
The job should perform these steps (a minimal sketch follows the list):
connect to the SQLDB Bluemix service using VCAP_SERVICES (for this step, see the documentation: https://www.ng.bluemix.net/docs/#services/SQLDB/index.html#SQLDB)
run the DDL (create table, ...) in your little job
close the connection
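A minimal sketch of such a job, assuming the service appears under a "sqldb" key in VCAP_SERVICES with jdbcurl/username/password credentials (verify these field names against your actual binding), a JSON-P (javax.json) implementation on the classpath, and the DB2 JDBC driver available:

    import java.io.StringReader;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    import javax.json.Json;
    import javax.json.JsonObject;

    public class SchemaSetupJob {

        public static void main(String[] args) throws Exception {
            // Bluemix injects bound service credentials as JSON in VCAP_SERVICES
            String vcap = System.getenv("VCAP_SERVICES");
            JsonObject services = Json.createReader(new StringReader(vcap)).readObject();

            // "sqldb" and the credential field names follow the usual SQLDB
            // layout -- check them against your own service binding
            JsonObject creds = services.getJsonArray("sqldb")
                    .getJsonObject(0)
                    .getJsonObject("credentials");

            String url = creds.getString("jdbcurl");
            String user = creds.getString("username");
            String password = creds.getString("password");

            try (Connection conn = DriverManager.getConnection(url, user, password);
                 Statement stmt = conn.createStatement()) {
                // Run the DDL; a real job should check the catalog first so it
                // doesn't fail when the table already exists
                stmt.executeUpdate(
                        "CREATE TABLE ORDERS (ID INT PRIMARY KEY, NAME VARCHAR(64))");
            } // connection closed automatically by try-with-resources
        }
    }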
Another option is to package a database migration helper (something like Flyway) in the application. Then you can invoke it on application startup (we've had good luck with @Singleton @Startup EJBs for this pattern). The migration will run when needed but leave the database alone otherwise. Another advantage of this pattern is that you can use migrations to update the schema of an existing database (as the name suggests).
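A sketch of that startup pattern with Flyway (the JNDI name jdbc/sqldb is a placeholder; the configure()/load() API shown is from Flyway 5+):

    import javax.annotation.PostConstruct;
    import javax.annotation.Resource;
    import javax.ejb.Singleton;
    import javax.ejb.Startup;
    import javax.sql.DataSource;

    import org.flywaydb.core.Flyway;

    @Singleton
    @Startup
    public class DatabaseMigrator {

        // Placeholder JNDI name; bind it to your SQLDB data source in server.xml
        @Resource(lookup = "jdbc/sqldb")
        private DataSource dataSource;

        @PostConstruct
        public void migrate() {
            // Applies any pending scripts from classpath:db/migration and is a
            // no-op when the schema is already up to date
            Flyway flyway = Flyway.configure()
                    .dataSource(dataSource)
                    .load();
            flyway.migrate();
        }
    }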

Spring Batch application integration with Spring Batch Admin

I developed a Spring Batch application which is deployed as an executable jar using a batch/shell script. It works fine.
Now I recently read about the Spring Batch Admin application release. As per their docs, you have to point to job-context.xml, and that allows the Spring Batch app to be started, restarted, and stopped from the admin app. My question is: do I have to keep my job-context.xml outside the jar, and what are the exact steps? I am confused about this configuration.
Any insight on this is very useful. By the way, I am using Spring Batch 2.1.
Thanks
The Spring Batch Admin application is a good reference implementation and is highly customizable. All interface implementations may be replaced via Spring DI using your own classes. The UI is also template driven (FreeMarker, I think) and may therefore be customized to display relevant information, change the skin, etc.
I had a similar need to yours - I needed admin functionality included in an app built as a jar. I did not quite like the fact that I had to package my jobs as a .war file. Instead, I extracted the relevant configurations from the Spring Batch Admin source and created a deployment that works off the file system and runs on an embedded Jetty server.
See screen shots here : https://github.com/regunathb/Trooper/wiki/Trooper-Batch-Web-Console
Source, configurations, etc. are available here: https://github.com/regunathb/Trooper/tree/master/batch-core . This project actually creates a .jar and not a .war.
If your application has custom classes and is deployed as a runnable jar rather than contained within Spring Batch Admin, you cannot start jobs; you can only view the status of jobs and "kill" their status in the database.
If you look at http://static.springsource.org/spring-batch-admin/reference/reference.xhtml, at the end of the Configuration Upload section it states:
You can see a new entry in the job registry ("test-job") which is launchable in-process because the application has a reference to the Job. (Jobs which are not launchable were executed out of process, but used the same database for its JobRepository, so they show up with their executions in the UI.)
If your jobs are strictly configurable jobs - as in, you use only XML to define them and do not need any customized item readers/processors/writers or other custom classes - then you can upload the job XML and it will be runnable from within the admin site. If you have custom classes then, from my experience, you will have to deploy the Spring Batch application within your web application and then upload an XML that contains the jobs you want to run separately.
I personally just used the Admin tool to view job status and provide statistics through some custom pages. I left the scheduler to run the jobs, and I didn't want those with access to the admin site to kick off a job when they knew nothing about it. Basically, I used it to give the users a warm fuzzy without allowing them to muck it up. (Leave it to a user to find an edge case you didn't account for.)