Running a Spring Batch job with partitions in Cloud Foundry

I have created a Spring Batch application with partitioning, following the example at https://github.com/mminella/S3JDBC. The app reads some files from an object store, does some processing, and writes the results back to the object store. With local partitioning it works fine on my machine.
To run it in Cloud Foundry, I changed the Maven setup, made the changes for the deployer partition handler and step execution listener, and deployed it to PCF.
But when I try to push and run the app on PCF, I get an error:
Failing URI /v2/info. Logging the error shows one call from my app, e.g. https://mypcf.com:443/v2/info, after which the failure occurs. I can't provide the full logs because of some restrictions. So I want to know:
1. To deploy a Spring Batch job on PCF, is there any extra configuration needed beyond the Maven dependency below and the code changes for the deployer partition handler, the step execution listener, and the cloud task annotation?

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-deployer-cloudfoundry</artifactId>
    <version>1.1.0.M1</version>
</dependency>

2. Is it mandatory to have a separate database service such as MySQL for the partitioned job? Can't I use H2 (the default one, if I don't configure anything)?
3. Do I need to do any configuration in PCF to support running multiple partitions?
4. As I am running remote partitioning, can I run the app locally from STS or IntelliJ (not on PCF Dev) so that it runs against PCF (remote) and launches the workers? (Sorry for the stupid question, I am new to PCF.)

Thanks for checking out my example. To answer your questions:
1. You should be able to use the latest deployer release (instead of that rather old version).
2. Yes. Partitioned steps all need to be able to share the same job repository data store, so an in-memory database like H2 will not work for that use case.
3. Besides defining your DataSource, that's all that is required to live in PCF. That being said, there are other things that need to be configured, but you can use other mechanisms to do so (Spring Cloud Config Server, application.properties/yml, etc.).
4. Yes, you should be able to run the master locally and have it deploy the workers onto PCF if you're using the CF deployer.
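For context, here is a minimal sketch of the kind of wiring described above, loosely based on the Spring Cloud Task partitioning samples. The Maven resource URI, step name, and worker count are placeholders, a TaskLauncher bean for the chosen deployer is assumed to be available, and the DeployerPartitionHandler constructor varies by Spring Cloud Task version (newer versions also take a TaskRepository):

import org.springframework.batch.core.explore.JobExplorer;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.cloud.deployer.spi.task.TaskLauncher;
import org.springframework.cloud.task.batch.partition.DeployerPartitionHandler;
import org.springframework.cloud.task.batch.partition.DeployerStepExecutionHandler;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.Resource;
import org.springframework.core.io.ResourceLoader;

@Configuration
public class PartitionConfiguration {

    // Master side: launches one worker task per partition via the configured deployer.
    @Bean
    public DeployerPartitionHandler partitionHandler(TaskLauncher taskLauncher,
                                                     JobExplorer jobExplorer,
                                                     ResourceLoader resourceLoader) {
        // The workers are launched from the same artifact as the master (placeholder coordinates).
        Resource workerResource = resourceLoader.getResource(
                "maven://com.example:partitioned-batch-job:1.0.0");

        DeployerPartitionHandler handler =
                new DeployerPartitionHandler(taskLauncher, jobExplorer, workerResource, "workerStep");
        handler.setMaxWorkers(2);
        return handler;
    }

    // Worker side (typically enabled via a "worker" profile): executes the step named
    // in the environment properties that the master passes to each launched task.
    @Bean
    public DeployerStepExecutionHandler stepExecutionHandler(ApplicationContext context,
                                                             JobExplorer jobExplorer,
                                                             JobRepository jobRepository) {
        return new DeployerStepExecutionHandler(context, jobExplorer, jobRepository);
    }
}

The same artifact serves as both master and worker; a profile (or an equivalent flag) decides which of these beans are active in a given process.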


How to use static spring cloud stream url for launching spring cloud tasks?

Platform used: Kubernetes.
I have an issue with the Spring Cloud Stream URL. I launch my Spring Cloud tasks using a Spring Cloud Stream deployed on Kubernetes. The stream contains http-kafka as source and taskLauncherKafka as sink. I use the http-kafka Kubernetes service URL to launch tasks, but that service URL changes after each deployment, which causes problems; the change in the service name after each stream deployment is difficult to manage. I also tried enabling a LoadBalancer, but in that case the external IP address changed after each stream rollout as well.
I am using Skipper to manage the deployments. Every time the stream is deployed, the stream version changes, which also changes the stream URL.
In my case I have multiple instances from which I can launch the Spring Cloud task. If the stream URL changes, I need to update the ConfigMap of the deployment project for every instance and redeploy them all.
Any solution? I am thinking of centralised configuration management using Spring Cloud Config Server or ZooKeeper. In that case I would still need to update the URL, but centralised configuration would at least let me avoid redeploying every instance.
Skipper server version: 2.4.1.RELEASE
Dataflow server version: 2.5.1.RELEASE
Which version of SCDF/Skipper are you running?
This looks similar to https://github.com/spring-cloud/spring-cloud-skipper/issues/953, which was subsequently addressed in Skipper 2.6.0.

Programmatically create Artemis cluster on remote server

Is it possible to programmatically create/update a cluster on a remote Artemis server?
I will have lots of Docker instances and would rather configure them on the fly than have to set everything in XML files, if possible.
Ideally on app launch I'd like to check if a cluster has been set up and if not create one.
This would probably involve getting the current server configuration and updating it with the cluster details.
I see it's possible to create a Configuration.
However, I'm not sure how to get the remote server configuration, if it's at all possible.
import org.apache.activemq.artemis.core.config.ClusterConnectionConfiguration;
import org.apache.activemq.artemis.core.config.Configuration;
import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;

// Building the desired cluster configuration locally is straightforward...
Configuration config = new ConfigurationImpl();
ClusterConnectionConfiguration ccc = new ClusterConnectionConfiguration();
ccc.setAddress("231.7.7.7");
config.addClusterConfiguration(ccc);
// ...but I need a way to get and update the configuration of the running (remote) server,
// something along the lines of:
// activeMQServer.getConfiguration();
Any advice would be appreciated.
If it is possible, is this a good approach to take to configure on the fly?
Thanks
The org.apache.activemq.artemis.core.config.impl.ConfigurationImpl object can be used to programmatically configure the broker. The broker test-suite uses this object to configure broker instances. However, this object is not available in any remote sense.
Once the broker is started there is a rich management API you can use to add things like security settings, address settings, diverts, bridges, addresses, queues, etc. However, the changes made by most (although not all) of these operations are volatile which means many of them would need to be performed every time the broker started. Furthermore, there are no management methods to add cluster connections.
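As an illustration only, here is a rough sketch of driving that management API remotely over JMX; the JMX URL and the address being created are made up, and as noted above there is no such operation for cluster connections:

import javax.management.MBeanServerConnection;
import javax.management.MBeanServerInvocationHandler;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import org.apache.activemq.artemis.api.core.management.ActiveMQServerControl;
import org.apache.activemq.artemis.api.core.management.ObjectNameBuilder;

public class RemoteBrokerManagement {
    public static void main(String[] args) throws Exception {
        // Connect to the broker's JMX endpoint (hypothetical host/port; requires JMX to be exposed).
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://artemis-host:1099/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        MBeanServerConnection mbeanServer = connector.getMBeanServerConnection();

        // Proxy for the broker's main control MBean (assumes the default broker naming).
        ObjectName brokerName = ObjectNameBuilder.DEFAULT.getActiveMQServerObjectName();
        ActiveMQServerControl control = MBeanServerInvocationHandler.newProxyInstance(
                mbeanServer, brokerName, ActiveMQServerControl.class, false);

        // Runtime changes like this are possible, but most are volatile across restarts,
        // and there is no equivalent operation for adding cluster connections.
        control.createAddress("app.events", "ANYCAST");

        connector.close();
    }
}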
You might consider using a tool like Ansible to manage the configuration or even roll your own solution with a templating engine like FreeMarker to customize the XML and then distribute it to your Docker instances using some other technology.

Spring Cloud Data Flow: versioned streams

I'm implementing a stream pipe with Spring Cloud Data Flow.
My problem is that I configured the pipe manually (e.g. http | log_sink) on the server, and that configuration will be lost if I reset the server (think of an Amazon EC2 instance that can be hard reset).
What is the suggested way to keep versioning of pipes when using SCDF?
Thanks.
I am summarizing the discussion from the comments.
To automate the promotion of stream/task workloads from lower to higher environments, the recommended approach is to use SCDF's Java DSL. With it, users can programmatically register, create, deploy, or launch streams and tasks in a repeatable manner and across many different platforms simultaneously (if there's a need for it). A Boot app built with the Java DSL can be versioned in Git, and it is CD/GitOps friendly. With sufficient generalization, such an app can also be reused by many different teams by overriding the defaults.
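For instance, a minimal sketch of that Java DSL (the server URL and the stream name/definition below are placeholders):

import java.net.URI;
import org.springframework.cloud.dataflow.rest.client.DataFlowTemplate;
import org.springframework.cloud.dataflow.rest.client.dsl.Stream;

public class PipePromotion {
    public static void main(String[] args) {
        // Point at the SCDF server of the target environment (placeholder URL).
        DataFlowTemplate dataFlow = new DataFlowTemplate(URI.create("http://scdf-server:9393"));

        // Re-creating and re-deploying the pipe is now a repeatable, versionable operation.
        Stream.builder(dataFlow)
                .name("http-to-log")              // placeholder stream name
                .definition("http | log")
                .create()
                .deploy();
    }
}

An app like this can live in Git next to the rest of the project, so a reset server can be rebuilt by simply re-running it against the new environment.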
We put this to use in the product proper for our IT and acceptance tests, which run on every upstream commit daily across multiple Kubernetes and Cloud Foundry installations.
Alternatively, all of the register, create, deploy, or launch stream/task commands can also be dumped into a text or property file. Once you have the file, the dataflow:>script --file command can slurp in all of those commands in each of the new environments (see the docs).

Azure Service Fabric - one app different named instances

I have an Azure Service Fabric application which consumes a RabbitMQ queue and does some calculations using data from a SQL database.
The connection strings for RabbitMQ and SQL are stored in ApplicationManifest.xml as parameters and are changed via different publishing profiles (I have different XML files for cloud and local deployment).
Now I want to deploy another instance of my application for another db/rabbitmq.
I suppose I must create another publishing profile, change the config package version (e.g. 1.1.0), and register the new application type in the cluster. But I must not upgrade the existing app; instead I should create another app with version 1.1.0.
So there will be two apps in my cluster:
App for db2/rabbit2, version 1.1.0
App for db1/rabbit1, version 1.0.0
Is this an appropriate scenario for having two apps with different connection strings?
One approach would be to only have one Application Type and then instantiate multiple Application instances of that type; each of those applications can consume a different db/rabbitmq. During application creation, you can pass different connection strings (db/rabbitmq) as parameters.
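For what it's worth, a rough sketch of creating such a second named instance with its own parameters through the Service Fabric REST API; the cluster endpoint, application/type names, and parameter keys below are all assumptions, and the same operation is normally done with the PowerShell tooling (New-ServiceFabricApplication):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateSecondAppInstance {
    public static void main(String[] args) throws Exception {
        // ApplicationDescription: same application type/version, different name and parameters.
        String body = """
                {
                  "Name": "fabric:/CalcApp2",
                  "TypeName": "CalcAppType",
                  "TypeVersion": "1.0.0",
                  "ParameterList": [
                    { "Key": "SqlConnectionString", "Value": "Server=db2;Database=calc" },
                    { "Key": "RabbitMqConnectionString", "Value": "amqp://rabbit2" }
                  ]
                }
                """;

        // Create the second named application instance via the cluster's HTTP gateway (placeholder host).
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://mycluster:19080/Applications/$/Create?api-version=6.0"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Status: " + response.statusCode());
    }
}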

Spring cloud data flow deployment

I want to deploy Spring Cloud Data Flow on several hosts.
I will deploy the Spring Cloud Data Flow server on one host, host-A, and deploy the agents on the other hosts (these hosts are in charge of executing the tasks).
Except for host-A, all the other hosts run the same tasks.
Should I build on the Spring Cloud Data Flow Local Server, the Spring Cloud Data Flow Apache YARN Server, or is there a better choice?
Do you mean how the apps are deployed on several hosts? If so, the apps are deployed using the underlying deployer implementation. For instance, with the local deployer, each app is deployed by spawning a new process. You can scale out the number of app deployments using the count property during stream deployment. I am not sure what you mean by agents here.
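To illustrate the count property, here is a minimal sketch using the SCDF Java DSL; the server URL and the stream definition are placeholders, and deployer.<app>.count is the deployer property that controls how many instances of an app get deployed:

import java.net.URI;
import java.util.Collections;
import java.util.Map;
import org.springframework.cloud.dataflow.rest.client.DataFlowTemplate;
import org.springframework.cloud.dataflow.rest.client.dsl.Stream;

public class ScaledStreamDeployment {
    public static void main(String[] args) {
        // SCDF server running on host-A (placeholder URL).
        DataFlowTemplate dataFlow = new DataFlowTemplate(URI.create("http://host-a:9393"));

        // Ask the underlying deployer for 3 instances of the "log" app.
        Map<String, String> deploymentProperties =
                Collections.singletonMap("deployer.log.count", "3");

        Stream.builder(dataFlow)
                .name("ticktock")            // placeholder stream
                .definition("time | log")
                .create()
                .deploy(deploymentProperties);
    }
}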