I'm running Spring XD as a single node in my sandbox environment, with a MySQL DB for the batch tables. If I kill -15 the Spring XD process, all the current definitions for my jobs and streams are lost (in the case of jobs, the XD_JOB_REGISTRY is apparently deleted). Consequently, when I start Spring XD again, all the previous job and stream definitions are gone.
I would like to know whether this is intentional in Spring XD, whether it is due to the fact that I run in single-node mode, or whether it is a bug.
EDITED TO ADD THE GIST OF SERVERS.YML:
https://gist.github.com/emedina/486b52f11bc146203534
The job and stream definitions are stored in ZooKeeper, while the stats for executed jobs are stored in the database. The single-node server uses an embedded ZooKeeper instance by default, which is most likely why your definitions are gone after a restart. Try setting up a separate ZooKeeper instance with a permanent data location.
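If I remember the configuration layout correctly, pointing the servers at an external ensemble is just a matter of setting the ZooKeeper connect string in servers.yml, roughly like this (host names are placeholders for your own ensemble):

# external ZooKeeper ensemble (replace the hosts with your own)
zk:
  client:
    connect: zk-host1:2181,zk-host2:2181,zk-host3:2181

With that in place, the definitions live in your own ZooKeeper ensemble and survive a restart of the XD process.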
I am building a data pipeline for batch processing, and I find Spring Cloud Data Flow quite an attractive framework to use. Without much knowledge of SCDF and Kubernetes, I am not sure whether it is possible to conditionally launch a Spring Cloud Task on a specific machine.
Suppose I have two physical servers for running the batch process (Server A and Server B). By default, I would like my Spring Cloud Task to be launched on Server A. If Server A is shut down, the task should be deployed on Server B. Can Kubernetes / SCDF handle this kind of mechanism? I am wondering whether the nodeSelector is what I should look into.
Yes, you can pass deployment.nodeSelector as a deployment property when launching the task.
The deployment.nodeSelector is a property of the Kubernetes deployer, so you need to pass something like this:
task launch mytask --properties "deployer.<taskAppName>.kubernetes.deployment.nodeSelector=foo1:bar1,foo2:bar2"
You can check the list of supported Kubernetes deployer properties here
Does anyone here have experience with batch processing (e.g. Spring Batch) on Kubernetes? Is it a good idea? How do you prevent batch processing from processing the same data if you use the Kubernetes auto-scaling feature? Thank you.
Does anyone here have experience with batch processing (e.g. Spring Batch) on Kubernetes? Is it a good idea?
For Spring Batch, we (the Spring Batch team) do have some experience on the matter which we share in the following talks:
Cloud Native Batch Processing on Kubernetes, by Michael Minella
Spring Batch on Kubernetes, by me.
Running batch jobs on kubernetes can be tricky:
pods may be re-scheduled by k8s on different nodes in the middle of processing
cron jobs might be triggered twice
etc.
This requires additional, non-trivial work on the developer's side to make sure the batch application is fault-tolerant (resilient to node failures, pod re-scheduling, etc.) and safe against duplicate job executions in a clustered environment.
Spring Batch takes care of this additional work for you and can be a good choice to run batch workloads on k8s for several reasons:
Cost efficiency: Spring Batch jobs maintain their state in an external database, which makes it possible to restart them from the last save point in case of job/node failure or pod re-scheduling
Robustness: Safe against duplicate job executions thanks to a centralized job repository
Fault-tolerance: Retry/Skip failed items in case of transient errors like a call to a web service that might be temporarily down or being re-scheduled in a cloud environment
I wrote a blog post in which I explain all these aspects in detail with code examples. You can find it here: Spring Batch on Kubernetes: Efficient batch processing at scale
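To make the restartability point concrete, here is a minimal sketch (class and bean names are made up, Spring Batch 4 style builders) of a chunk-oriented job: with @EnableBatchProcessing and a DataSource configured, progress is committed to the job repository at every chunk boundary, so a failed or rescheduled execution can be restarted from the last commit point.

import java.util.Arrays;

import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.support.ListItemReader;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Illustrative only: bean names and the trivial reader/writer are made up.
@Configuration
@EnableBatchProcessing // wires a JobRepository backed by the application's DataSource
public class RestartableJobConfig {

    @Bean
    public Step step(StepBuilderFactory steps) {
        // Chunk-oriented step: the read/write position is persisted in the
        // job repository every 100 items, which is what allows a restart to
        // resume from the last commit point instead of reprocessing everything.
        return steps.get("step")
                .<String, String>chunk(100)
                .reader(new ListItemReader<>(Arrays.asList("a", "b", "c")))
                .writer(items -> items.forEach(System.out::println))
                .build();
    }

    @Bean
    public Job job(JobBuilderFactory jobs, Step step) {
        return jobs.get("job").start(step).build();
    }
}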
How do you prevent batch processing from processing the same data if you use the Kubernetes auto-scaling feature?
Making each job process a different data set is the way to go (a job per file, for example). But there are different patterns you might be interested in; see Job Patterns in the k8s docs.
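For the job-per-file pattern, identifying job parameters are what prevents the same data from being processed twice: each distinct file name creates a new JobInstance, and a launch that matches an already completed instance is rejected. A small sketch (class and parameter names are illustrative):

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;

// Illustrative launcher: one launch per input file.
public class PerFileJobLauncher {

    private final JobLauncher jobLauncher;
    private final Job job;

    public PerFileJobLauncher(JobLauncher jobLauncher, Job job) {
        this.jobLauncher = jobLauncher;
        this.job = job;
    }

    public JobExecution launchForFile(String inputFile) throws Exception {
        // The file name is an identifying parameter: each distinct file maps to
        // its own JobInstance, and re-launching a completed instance fails with
        // JobInstanceAlreadyCompleteException instead of reprocessing the data.
        JobParameters parameters = new JobParametersBuilder()
                .addString("input.file", inputFile)
                .toJobParameters();
        return jobLauncher.run(job, parameters);
    }
}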
Is there any Redis JobStore able to support a Quartz cluster?
Has anybody been able to build that?
On another note, what exactly is a Quartz cluster? I mean, is it possible to have two services running the same quartz.properties file pointing to the same Redis?
EDIT
I've tried this Redis job store, but it seems it doesn't support Quartz clustering:
JobStore class 'net.joelinn.quartz.jobstore.RedisJobStore' props could not be configured. [See nested exception: java.lang.NoSuchMethodException: No setter for property 'isClustered']
quartz.properties:
org.quartz.scheduler.instanceName=office-scheduler-service
org.quartz.scheduler.instanceId=AUTO
org.quartz.jobStore.isClustered=true
org.quartz.jobStore.clusterCheckinInterval=20000
# thread-pool
org.quartz.threadPool.class=org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount=2
org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread=true
org.quartz.jobStore.class = net.joelinn.quartz.jobstore.RedisJobStore
org.quartz.jobStore.host = redisbo
org.quartz.jobStore.misfireThreshold = 60000
You don't need to configure clustering; please check the source code, it is already clustered.
The Quartz JDBC documentation explains how Quartz handles executing jobs in a cluster of application nodes. RedisJobStore extends that to use Redis storage, and it works in cluster mode (a Quartz cluster, not a Redis cluster) by default, without requiring you to enable it.
Basically, Quartz uses a shared database to record which scheduler instance is currently working on a job, as opposed to direct communication among the application schedulers. When a scheduler instance picks up a job, it safely registers its instance ID with the running job and persists it in the database. This support in the job store is evident in the schema used by RedisJobStore, indicated by the blocked_by fields.
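In other words, the NoSuchMethodException in the question comes from passing a JDBC-store-only key to RedisJobStore. A trimmed quartz.properties along these lines should work (a sketch based on the values from the question; isClustered is dropped, and clusterCheckinInterval is left out as well since it belongs to the JDBC store's check-in mechanism):

org.quartz.scheduler.instanceName=office-scheduler-service
org.quartz.scheduler.instanceId=AUTO

# thread-pool
org.quartz.threadPool.class=org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount=2
org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread=true

# Redis job store: cluster behaviour comes from the shared Redis state,
# so the JDBC clustering keys (isClustered, clusterCheckinInterval) are omitted.
org.quartz.jobStore.class=net.joelinn.quartz.jobstore.RedisJobStore
org.quartz.jobStore.host=redisbo
org.quartz.jobStore.misfireThreshold=60000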
I want to run a Flink job on Kubernetes. Using a (persistent) state backend, it seems that crashing TaskManagers are no issue, as they can ask the JobManager which checkpoint they need to recover from, if I understand correctly.
A crashing JobManager seems to be a bit more difficult. On the FLIP-6 page I read that ZooKeeper is needed so that the JobManager knows which checkpoint to recover from, and for leader election.
Seeing as Kubernetes will restart the JobManager whenever it crashes, is there a way for the new JobManager to resume the job without having to set up a ZooKeeper cluster?
The current solution we are looking at is to create a savepoint when Kubernetes wants to kill the JobManager (because it wants to move it to another VM, for example), but this would only work for graceful shutdowns.
Edit:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-HA-with-Kubernetes-without-Zookeeper-td15033.html seems interesting, but has no follow-up.
Out of the box, Flink requires a ZooKeeper cluster to recover from JobManager crashes. However, I think you can have a lightweight implementation of the HighAvailabilityServices, CompletedCheckpointStore, CheckpointIDCounter and SubmittedJobGraphStore which can bring you quite far.
Given that you have only one JobManager running at all times (not entirely sure whether K8s can guarantee this) and that you have a persistent storage location, you could implement a CompletedCheckpointStore which retrieves the completed checkpoints from the persistent storage system (e.g. reading all stored checkpoint files). Additionally, you would have a file which contains the current checkpoint id counter for CheckpointIDCounter and all the submitted job graphs for the SubmittedJobGraphStore. So the basic idea is to store everything on a persistent volume which is accessible by the single JobManager.
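To make that a bit more concrete, here is a rough sketch of the file-backed counter part of such a setup. The class name is made up and it deliberately does not implement Flink's actual CheckpointIDCounter interface (whose exact methods differ between versions); it only shows the idea of keeping the next checkpoint id on the persistent volume, assuming a single JobManager writes to it at a time.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch only: keeps the next checkpoint id in a small file on the
// persistent volume so a restarted JobManager continues where the
// crashed one left off.
public class FileBackedCheckpointIdCounter {

    private final Path counterFile;

    public FileBackedCheckpointIdCounter(Path counterFile) {
        this.counterFile = counterFile;
    }

    public synchronized long getAndIncrement() throws IOException {
        // Flink checkpoint ids start at 1; fall back to that if no file exists yet.
        long current = Files.exists(counterFile)
                ? Long.parseLong(Files.readString(counterFile).trim())
                : 1L;
        Files.writeString(counterFile, Long.toString(current + 1));
        return current;
    }
}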
I implemented a light version of file-based HA, based on Till's answer and Xeli's partial implementation.
You can find the code in this GitHub repo; it runs well in production.
I also wrote a blog series explaining how to run a job cluster on k8s in general, and about this file-based HA implementation specifically.
For everyone interested in this, I am currently evaluating and implementing a similar solution using Kubernetes ConfigMaps and a blob store (e.g. S3) to persist job metadata across JobManager restarts. There is no need for local storage, as the solution relies on state persisted in the blob store.
GitHub: thmshmm/flink-k8s-ha
There is still some work to do (persisting checkpoint state), but the basic implementation works quite nicely.
If someone wants to use multiple JobManagers, Kubernetes provides an interface for leader election which could be leveraged for this.
Is it possible to configure Spring Batch Admin to start master and slave jobs? We have one process as master and 3-4 slave nodes.
Spring Batch Admin is running in a separate JVM process, but all Spring Batch jobs are using the same batch DB schema.
Spring Batch Admin is only able to launch locally deployed jobs. So while you can launch a job that has a master/slave configuration, the job that owns the master must be deployed locally. You could wire things up to launch remote jobs, but you'd have to do that wiring yourself.
That being said, Spring XD (http://projects.spring.io/spring-xd/) is a distributed runtime that is able to launch jobs that are remotely deployed.
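If you do decide to wire up remote launching yourself, one common route (a sketch of the idea, not something Spring Batch Admin gives you out of the box) is to send a message with the job name and parameters over a shared broker and have each slave node resolve and launch the job from its local registry, for example with Spring Batch Integration. The class below is illustrative; the messaging and channel configuration is omitted.

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.configuration.JobRegistry;
import org.springframework.batch.core.launch.JobLauncher;

// Runs on the slave JVM where the job is actually deployed; typically wired
// as a Spring Integration service activator listening on a shared queue.
public class RemoteJobLaunchHandler {

    private final JobRegistry jobRegistry;
    private final JobLauncher jobLauncher;

    public RemoteJobLaunchHandler(JobRegistry jobRegistry, JobLauncher jobLauncher) {
        this.jobRegistry = jobRegistry;
        this.jobLauncher = jobLauncher;
    }

    public JobExecution handle(String jobName, JobParameters parameters) throws Exception {
        // Resolve the locally deployed job by name and launch it; since both
        // JVMs share the same batch schema, the execution shows up in Admin.
        Job job = jobRegistry.getJob(jobName);
        return jobLauncher.run(job, parameters);
    }
}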