JBPM JobExecutor not working on PCF but working locally - drools

We implemented job executors in jBPM, running the jBPM engine as a Spring Boot KIE server.
The job executors run locally and update the RequestInfo table accordingly, but when we deploy to PCF the scheduled jobs do not run.
We enabled the property jbpm.executor.enabled=true
and other related properties.
Please advise what the issue could be on PCF.
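For context, a minimal sketch of how a job can be scheduled through the autoconfigured jBPM executor in a Spring Boot KIE server, assuming jbpm.executor.enabled=true exposes the org.kie.api.executor.ExecutorService bean (the command, delay, and business key are just examples). Locally this produces the RequestInfo rows we see; on PCF the same rows are created but never executed:

```java
import java.util.Date;

import org.kie.api.executor.CommandContext;
import org.kie.api.executor.ExecutorService;
import org.springframework.stereotype.Component;

@Component
public class SampleJobScheduler {

    private final ExecutorService executorService;

    public SampleJobScheduler(ExecutorService executorService) {
        // Autoconfigured by the jBPM Spring Boot starter when jbpm.executor.enabled=true
        this.executorService = executorService;
    }

    public Long scheduleSampleJob() {
        CommandContext ctx = new CommandContext();
        ctx.setData("businessKey", "sample-job"); // shows up in the RequestInfo table

        // Schedule the built-in PrintOutCommand one minute from now;
        // if the row stays QUEUED, the executor threads never picked it up.
        return executorService.scheduleRequest(
                "org.jbpm.executor.commands.PrintOutCommand",
                new Date(System.currentTimeMillis() + 60_000),
                ctx);
    }
}
```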

Related

Running batch jobs in Pivotal Cloud Foundry

We have a requirement to migrate mainframe batch jobs to PCF. However, because of the 3 R's of security (Rotate, Repave, Repair), the instance where a batch job is running as a Spring Batch job may be repaved or repaired, and our running jobs would be terminated. In that scenario, how do we ensure that our jobs are not impacted during the repave/repair of a PCF instance? We are looking for the best way to migrate these jobs to PCF; any help or suggestions would be appreciated.
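One mitigation (not from the original post) is to make each Spring Batch job restartable against a persistent JobRepository backed by an external database, so a job killed by a repave can be relaunched and resume from the last committed chunk. A minimal sketch, assuming Spring Boot with spring-boot-starter-batch and an external DataSource already configured; the job, step, and item names are illustrative:

```java
import java.util.Arrays;

import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.support.ListItemReader;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableBatchProcessing
public class RestartableJobConfig {

    @Bean
    public Job migrationJob(JobBuilderFactory jobs, Step migrationStep) {
        // Execution state is persisted in the JobRepository (external DB),
        // so a run interrupted by a repave can be restarted with the same parameters.
        return jobs.get("migrationJob")
                .start(migrationStep)
                .build();
    }

    @Bean
    public Step migrationStep(StepBuilderFactory steps) {
        return steps.get("migrationStep")
                // Commit every 100 items; a restart resumes after the last commit.
                .<String, String>chunk(100)
                .reader(new ListItemReader<>(Arrays.asList("record-1", "record-2", "record-3")))
                .writer(items -> items.forEach(System.out::println))
                .build();
    }
}
```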

How to use static spring cloud stream url for launching spring cloud tasks?

Platform used: Kubernetes.
I have an issue with the Spring Cloud Stream URL. I launch my Spring Cloud Tasks using a Spring Cloud Stream deployed on Kubernetes. The stream contains http-kafka as the source and taskLauncherKafka as the sink. I use the http-kafka Kubernetes service URL to launch tasks, but the service URL changes after each deployment, which causes problems; the change in the service name after each stream deployment is difficult to manage. I also tried enabling a LoadBalancer, but in that case the external IP address likewise changed after each stream roll-out.
I am using Skipper to manage the deployments. Every time the stream is deployed, the stream version changes, which also changes the stream URL.
In my case, I have multiple instances from which I can launch the Spring Cloud Task. If the stream URL changes, I need to update the ConfigMap of the deployment project for every instance and redeploy all of them.
Any solution? I am considering centralized configuration management with Spring Cloud Config Server or ZooKeeper. In that case I would still need to update the URL, but at least I could avoid redeploying multiple instances.
Skipper server version : 2.4.1.RELEASE
Dataflow server version : 2.5.1.RELEASE
Which version of SCDF/Skipper are you running?
This looks similar to the issue https://github.com/spring-cloud/spring-cloud-skipper/issues/953, which was subsequently addressed in Skipper 2.6.0.
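Beyond the Skipper fix, one way to reduce the churn (not from the answer above) is to keep the launch URL out of the code entirely and in a single configuration property, so a stream roll-out requires changing only one value in the ConfigMap or Config Server. A minimal sketch, assuming a task.launch.url property that points at the http-kafka source's Kubernetes Service; the property name, URL, and payload are illustrative, and the actual request body depends on the task-launcher sink you use:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

@Component
public class TaskLaunchClient {

    private final RestTemplate restTemplate = new RestTemplate();

    // e.g. task.launch.url=http://my-stream-http.default.svc.cluster.local:8080
    // Kept in a ConfigMap / Config Server so only one value changes per roll-out.
    @Value("${task.launch.url}")
    private String launchUrl;

    public void launch(String payload) {
        // POST the task launch request to the http source of the stream.
        restTemplate.postForEntity(launchUrl, payload, String.class);
    }
}
```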

How to set scheduler for Spring Batch jobs in Spring Cloud Data Flow?

I'm setting up a new Spring Batch job and want to deploy it using SCDF. However, I have found that SCDF does not support the scheduler feature on the local platform.
I have 3 questions to ask you:
Can someone explain how the SCDF scheduler works?
Are there any ways to schedule a single job using SCDF?
Can I use my local server as a Cloud Foundry? And how?
Correct, Spring Cloud Data Flow does not support scheduling on the local platform. Please note that the local SCDF server is intended for development purposes only and, by design, the scheduling support relies on the platform. Hence, the SCDF scheduling feature is supported on Cloud Foundry and Kubernetes using the CF and K8s schedulers.
1) Can someone explain how the SCDF scheduler works?
Sure. Similar to how the deployer is used for launching tasks and deploying streams, there is an SPI for scheduling tasks under the spring-cloud-deployer project, which the underlying scheduler implementations provide. Currently, there are CF and K8s scheduler implementations in spring-cloud-deployer-cloudfoundry and spring-cloud-deployer-kubernetes.
As a user, you can configure a scheduler for a task (batch) application (via the SCDF Dashboard, shell, etc.) and specify a cron expression for the schedule. Once configured, SCDF delegates the schedule request to the platform scheduler using the above-mentioned scheduler implementations. From then on, it is the platform (the PCF scheduler on CF, the K8s scheduler on K8s) that launches the task according to the schedule; see the sketch after this answer.
2) Are there any ways to schedule a single job using SCDF?
Yes, see the answer to 1) above.
3) Can I use my local server as a Cloud Foundry? And how?
To run SCDF locally pointing to a CF instance, you can set the necessary CF deployer properties and start the SCDF server instance. It is similar to how you configure multiple platforms in the SCDF server. You can find more documentation on this here.
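As a concrete illustration of point 1), here is a sketch of creating a schedule programmatically with the Data Flow REST client, assuming the SchedulerOperations#schedule(scheduleName, taskDefinitionName, taskProperties, commandLineArgs) overload available in your SCDF 2.x client; the schedule name, task definition name, cron expression, and server URL are examples, and the same can be done from the Dashboard or shell:

```java
import java.net.URI;
import java.util.Collections;
import java.util.Map;

import org.springframework.cloud.dataflow.rest.client.DataFlowTemplate;

public class ScheduleTaskExample {

    public static void main(String[] args) {
        // Connect to the SCDF server, which delegates to the platform scheduler (PCF or K8s).
        DataFlowTemplate dataFlow = new DataFlowTemplate(URI.create("http://scdf-server:9393"));

        // Run the existing task definition "my-batch-task" every day at 22:00.
        Map<String, String> taskProperties =
                Collections.singletonMap("scheduler.cron.expression", "0 22 * * *");

        dataFlow.schedulerOperations().schedule(
                "my-batch-task-nightly",   // schedule name
                "my-batch-task",           // task definition name
                taskProperties,
                Collections.emptyList());  // command line args
    }
}
```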

Spring cloud data flow deployment

I want to deploy Spring Cloud Data Flow on several hosts.
I will deploy the Spring Cloud Data Flow server on one host (host-A) and deploy agents on the other hosts (these hosts are in charge of executing the tasks).
Except for host-A, all the other hosts run the same tasks.
Should I build on the Spring Cloud Data Flow Local Server, on the Spring Cloud Data Flow Apache YARN Server, or is there a better choice?
Do you mean how the apps are deployed on several hosts? If so, the apps are deployed using the underlying deployer implementation. For instance, with the local deployer, each app is deployed by spawning a new process. You can scale out the number of app instances using the count property during stream deploy (see the sketch below). I am not sure what you mean by the agents here.
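For completeness, a sketch of passing the count deployment property when deploying a stream through the Data Flow REST client; the stream name, app name, and server URL are examples, and the same property can be supplied from the shell or Dashboard:

```java
import java.net.URI;
import java.util.Collections;
import java.util.Map;

import org.springframework.cloud.dataflow.rest.client.DataFlowTemplate;

public class DeployStreamWithCount {

    public static void main(String[] args) {
        DataFlowTemplate dataFlow = new DataFlowTemplate(URI.create("http://scdf-server:9393"));

        // Deploy three instances of the "log" app in the already-created "httptest" stream.
        Map<String, String> deploymentProperties =
                Collections.singletonMap("deployer.log.count", "3");

        dataFlow.streamOperations().deploy("httptest", deploymentProperties);
    }
}
```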

Using logstash, config server and eureka with spring cloud task and dataflow

We have an existing microservice environment with Logstash, Config, and Eureka servers. We are now setting up a Spring Cloud Data Flow (Kubernetes) environment (initially primarily to run tasks/batch jobs).
Ideally we would like the tasks to use the existing logstash, config and eureka servers via the standard spring boot configuration (annotations etc) to support the following scenarios:
Logstash: when a task runs, its logs are output to Logstash and viewable in Kibana.
Config Server: to support changing configuration properties for tasks, e.g. a periodic task's configuration can be tweaked by altering the values on the configuration server, and the next time the task runs it will use the new values.
My understanding is that Config Server properties will override properties in the task definition, which override properties in the internal application.properties.
Eureka: each task would register itself in Eureka. The main reason for this is that our tasks have web actuator endpoints exposed, and we can then use Spring Boot Admin (which can discover services via Eureka) to access the actuator endpoints and information while a task is running.
(Some of our tasks can take hours to run and this would enable us to monitor them, adjust logging etc)
Is this a sensible approach, or are there any potential issues to look out for here (e.g. short-lived tasks with Eureka)? I can't find any discussion of this in the existing Spring Cloud Data Flow or Spring Cloud Task documentation.
You may try logstash-logback-encoder for SCDF integration with the ELK stack. It works fine for our SCDF-on-YARN stream applications.
Config Server should work for any Spring Boot application.
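To illustrate the Config Server point, a minimal sketch of a task whose behaviour is driven by an externalized property, assuming spring-cloud-starter-config is on the classpath and spring.cloud.config.uri points at your existing Config Server; the application and property names are examples:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.task.configuration.EnableTask;
import org.springframework.context.annotation.Bean;

@EnableTask
@SpringBootApplication
public class PeriodicTaskApplication {

    public static void main(String[] args) {
        SpringApplication.run(PeriodicTaskApplication.class, args);
    }

    @Bean
    public CommandLineRunner run(@Value("${report.batch-size:100}") int batchSize) {
        // The value is resolved from Config Server at startup, so changing it there
        // takes effect on the next task run without rebuilding the task.
        return args -> System.out.println("Running with batch size " + batchSize);
    }
}
```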