jBPM Repositories disappear after Wildfly restart - jboss

Pardon if I can't give more pointers, but I'm really a noob at WildFly. I'm using version 9.0.2.
I have deployed jbpm-console, drools, and dashboard with no problems there. But when I restart WildFly using the JBoss CLI and log in again, the repositories no longer appear in the web interface or on disk (at least nothing that grep or find will show).
I'm using the H2 database. I'm not even sure where to look; does anyone have any idea?
Thanks in advance!

After enough reading through the docs, it turns out that jBPM has to be explicitly configured to persist its runtime data. From the docs:
"By default, the engine does not save runtime data persistently. This means you can use the engine completely without persistence (so not even requiring an in-memory database) if necessary, for example for performance reasons, or when you would like to manage persistence yourself. It is, however, possible to configure the engine to use persistence. This usually requires adding the necessary dependencies, configuring a datasource and creating the engine with persistence configured."
https://docs.jboss.org/jbpm/v5.3/userguide/ch.core-persistence.html
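For what it's worth, here is a minimal sketch of what "creating the engine with persistence configured" looks like with the jBPM 5.x API from those docs. It assumes the documented "org.jbpm.persistence.jpa" persistence unit (defined in META-INF/persistence.xml and pointing at your datasource), a Bitronix transaction manager, and a hypothetical MyProcess.bpmn:

    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    import org.drools.KnowledgeBase;
    import org.drools.KnowledgeBaseFactory;
    import org.drools.builder.KnowledgeBuilder;
    import org.drools.builder.KnowledgeBuilderFactory;
    import org.drools.builder.ResourceType;
    import org.drools.io.ResourceFactory;
    import org.drools.persistence.jpa.JPAKnowledgeService;
    import org.drools.runtime.Environment;
    import org.drools.runtime.EnvironmentName;
    import org.drools.runtime.StatefulKnowledgeSession;

    import bitronix.tm.TransactionManagerServices;

    public class PersistentEngineExample {

        public static void main(String[] args) {
            // Build a knowledge base from a (hypothetical) process definition.
            KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
            kbuilder.add(ResourceFactory.newClassPathResource("MyProcess.bpmn"), ResourceType.BPMN2);
            KnowledgeBase kbase = kbuilder.newKnowledgeBase();

            // JPA persistence unit from META-INF/persistence.xml, pointing at the datasource.
            EntityManagerFactory emf =
                    Persistence.createEntityManagerFactory("org.jbpm.persistence.jpa");

            Environment env = KnowledgeBaseFactory.newEnvironment();
            env.set(EnvironmentName.ENTITY_MANAGER_FACTORY, emf);
            env.set(EnvironmentName.TRANSACTION_MANAGER,
                    TransactionManagerServices.getTransactionManager());

            // Session state is now stored in the database instead of only in memory.
            StatefulKnowledgeSession ksession =
                    JPAKnowledgeService.newStatefulKnowledgeSession(kbase, null, env);
            int sessionId = ksession.getId();

            ksession.startProcess("com.sample.MyProcess");
            ksession.dispose();

            // After a restart, the same session can be reloaded by its id.
            StatefulKnowledgeSession restored =
                    JPAKnowledgeService.loadStatefulKnowledgeSession(sessionId, kbase, null, env);
        }
    }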

Related

Spring boot Kogito Mongodb integration

I'm working on creating a Kogito BPM Spring Boot project. I'm very happy to see the reduced level of complexity of integrating jBPM into Spring Boot with the help of Kogito. I'm struggling to find answers to my questions, so I'm posting them here:
Kogito is an open-source cloud offering for jBPM. Am I correct?
I see that only MongoDB or Infinispan can be used or supported with Kogito, and that I can't integrate PostgreSQL with Kogito. Am I correct?
I successfully created the Spring Boot Kogito MongoDB project, and when I placed a .bpmn file in the resources folder, endpoints were created automatically. I was able to access them, run the process and get a response. But I don't see any entries created in MongoDB; I don't even see the table being created. The .bpmn contains a simple hello-world flow with start + script task + end nodes. Please help me understand this. Is the RuntimeManager configured for a per-request strategy? How can I change it?
Answers inline.
Kogito is an open-source cloud offering for jBPM. Am I correct?
Kogito is open-source and has jBPM integrated into its codebase so that it runs in a cloud-native environment. In addition, a lot of work has gone into making it run with native compilation when used with Quarkus.
I see that only MongoDB or Infinispan can be used or supported with Kogito, and that I can't integrate PostgreSQL with Kogito. Am I correct?
To date, Kogito has the following add-ons to support persistence:
Infinispan
Postgres
MongoDB
JDBC (so you can extend to support any database you wish)
See more about it here https://docs.jboss.org/kogito/release/latest/html_single/#con-persistence_kogito-developing-process-services.
But I don't see any entries created in MongoDB.
Do you mind sharing a reproducer? Have you taken a look at the examples in https://github.com/kiegroup/kogito-examples/tree/stable/process-mongodb-persistence-springboot? That example shows a call to a sub-process that relies on a user task, so the process must be persisted in order to be fired up again by the later request that resolves the task. However, since your process starts and ends within a single request, there is nothing to be persisted in the DB:
Runtime persistence is intended primarily for storing data that is required to resume workflow execution for a particular process instance. Persistence applies to both public and private processes that are not yet complete. Once a process completes, persistence is no longer applied. This persistence behavior means that only the information that is required to resume execution is persisted.
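To illustrate (a sketch only; the /hello path and the empty JSON payload are assumptions based on how Kogito derives REST endpoints from the process id): a start -> script -> end process is already finished by the time the HTTP response below comes back, so the MongoDB add-on never has an active instance to store. Add a user task or timer to the .bpmn so the instance outlives the request, and you should then see it persisted.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class StartHelloProcess {

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // Hypothetical endpoint: Kogito generates the path from the process id in the .bpmn.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080/hello"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString("{}"))
                    .build();

            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            // For a start -> script -> end flow the instance is already completed here,
            // so nothing is written to MongoDB. A flow that stops at a user task stays
            // active and is persisted so a later request can resume it.
            System.out.println(response.statusCode() + " " + response.body());
        }
    }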

CDI and User Defined Functions (UDF) in Teiid / Wildfly

Recently I began working with Teiid and WildFly. I have a user-defined function (UDF) that adds custom functionality to Teiid, and it works as expected. However, I need to modify it further and would like to use CDI to inject a bean from the WildFly app server. I know that the UDF isn't managed by the container (it is a WildFly module with an associated module.xml file, deployed as a jar), so I've added what seemed to be the necessary dependencies to module.xml, but it doesn't work.
Is it possible to use CDI in a UDF with Teiid / Wildfly, and if so, how?
No, it is not possible. Although Teiid is a resident of WildFly and uses WildFly's infrastructure for a variety of features (transactions, security, data sources, administration, etc.), it is not part of Java EE or anything like it, so there is no direct way to do this. If you explain what you are trying to accomplish, maybe we can offer further guidance on alternatives.
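For context only (not part of the answer above): a Teiid UDF is just a public static method in the module, which is why no CDI injection ever happens on it. If the goal is simply to reach a container resource from inside the function, a plain JNDI lookup is one workaround worth discussing; whether it fits depends on what the bean does. All names below are hypothetical:

    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public class MyCustomFunctions {

        // Hypothetical business interface that the application exposes via JNDI.
        public interface MyService {
            String enrich(String value);
        }

        // Teiid UDFs are plain public static methods; the container never injects into them.
        public static String enrich(String value) {
            try {
                // Look the bean up instead of injecting it; the JNDI name depends on
                // how (and whether) the application binds it.
                InitialContext ctx = new InitialContext();
                MyService service = (MyService) ctx.lookup("java:global/my-app/MyService");
                return service.enrich(value);
            } catch (NamingException e) {
                throw new RuntimeException("MyService lookup failed", e);
            }
        }
    }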

Getting started for team development

I want to start developing with a team using a Neo4j DB, a Spring Boot backend and an AngularJS frontend.
For that, I want to have a Maven Repository and a Jenkins.
To enable my team to use this, I want to have some kind of server at home that can provide remote (secured) access to the Maven repo, Jenkins and the Neo4j DB, and that can host the AngularJS frontend communicating with the Spring backend.
I don't really know where to start. After some googling I found NAS devices, but I'm not sure whether they suit my requirements.
I've found tutorials for configuring a VPN but there may be a simpler way.
What would you recommend?
So, after some more asking around and googling, I found two possible solutions that I want to try out in the future:
The first seems to be a NAS (I've only read about Synology), although it doesn't seem to be intended for my requirements. However, there are packages available in the DiskStation OS that allow the installation of Jenkins, a Maven repository and Docker, which makes it possible to host a Neo4j DB. I was told I should be cautious, because only the x86 DiskStations support Docker. At this point I'm not too sure what this means, but since I'm posting an answer, I don't want to keep this knowledge to myself.
I didn't really find anything on hosting applications.
The second solution seems to be to build a home server. In my current understanding, a spare PC at home should suffice for that. All the steps involved should be available here (German).
I didn't find anything about hosting applications here either, but since this is a "real" system, I'm pretty sure it's possible.
I'm going to try the second one out and will keep you updated, as long as I don't forget :)

Java WebApp choosing JTA, database replicator/load-balancer or both

We have a webapp that currently runs on one instance of Apache Tomcat with one database instance, but the increase in traffic will soon (probably) force us to load-balance several webapp instances, and we've run into a problem that seems to have no easy answer.
Currently our JDBC DataSource is configured as Resource-local rather than Transactional, and after some searching, everyone recommends using Transactional, which requires a JTA provider. No real justification is given for why I shouldn't just stick with the current setup, where a servlet filter catches any unhandled exception and rolls back the active transaction. Besides that, the only provider I've found that is just a JTA provider (not bundled with five more Java EE technologies) and is still maintained is Bitronix. The other alternative is to move off Tomcat to GlassFish, since it is a full Java EE platform, and we also use JavaMail, JPA and JAX-RS.
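For reference, the resource-local setup described above usually boils down to a filter along these lines (a sketch only; the persistence unit name and how the EntityManager reaches the rest of the request are assumptions):

    import java.io.IOException;

    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.EntityTransaction;
    import javax.persistence.Persistence;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;

    public class TransactionFilter implements Filter {

        private EntityManagerFactory emf;

        @Override
        public void init(FilterConfig filterConfig) {
            emf = Persistence.createEntityManagerFactory("myPU"); // hypothetical unit name
        }

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            EntityManager em = emf.createEntityManager();
            EntityTransaction tx = em.getTransaction();
            try {
                tx.begin();
                // Downstream code would obtain this EntityManager, e.g. via a request
                // attribute or a ThreadLocal (omitted here).
                chain.doFilter(req, res);
                tx.commit();
            } catch (Exception e) {
                if (tx.isActive()) {
                    tx.rollback(); // roll back the active transaction on any unhandled exception
                }
                throw new ServletException(e);
            } finally {
                em.close();
            }
        }

        @Override
        public void destroy() {
            emf.close();
        }
    }

Under a JTA setup this boundary would instead be managed by the transaction manager (for example Bitronix or the application server) rather than by hand-rolled filter code.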
Only one transaction scenario uses Serializable isolation level.
As for the database, we may be looking too far ahead to think of distributed storage like Postgres-XL or pgpool, but if we make the wrong choice now it will be harder to fix later.
My questions are as follows:
Do synchronous database replication tools and JTA complement each other, hinder each other, or just perform the same consistency checks twice?
Do we need JTA if we only have one database, but multiple webapp instances?
Do we need JTA if we have multiple databases and multiple webapp instances?
Should we just switch to GlassFish or something like TomEE?
Supposedly there are ways to keep using Hibernate as our JPA provider under both. It would be tedious to have to rewrite all our native queries to use positional parameters, because EclipseLink and OpenJPA don't support named parameters there. That little extra feature makes Hibernate worth choosing above all other JPA providers for me.

Uninstallation of an application leaves leftovers such as BLAs and CUs

I came across a problem with cyclic (deploy-undeploy) deployments to WebSphere 7, where an uninstalled application leaves a dirty workspace behind. IBM has a fix (PM20642) for it in cumulative updates starting from 7.0.15, but I see no difference: the orphaned folders for the business-level application (BLA) and the composition unit (CU) are still present after undeployment. I'm using the JMX admin client for connectivity to the server.
Does anyone have any experience dealing with this issue?
If you're using IBM's fix and it still fails, I would say open a PMR with IBM to help you investigate. It could be that their fix didn't work as they expected, or maybe the fix pack was not applied correctly. Either way, you may want IBM's support to resolve this issue.
If you only have remote access via JMX, then you could try to use $AdminConfig deleteDocument in wsadmin to remove the files/folders from the configuration repository.