Quartz and Apache Mesos have very complementary features, and it would be great to combine the scheduling flexibility of Quartz with the distributed computing of Apache Mesos.
I couldn't find anything on Google.
Has anybody tried this?
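For context, the scheduling side on its own is straightforward; here is a minimal Quartz 2.x sketch (the job class and cron expression are invented for illustration - the open question is handing the work off to Mesos instead of running it locally):

```java
import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

public class QuartzDemo {

    // A trivial job; in the Mesos scenario this is where you would
    // hand the work off to the cluster instead of running it in-process.
    public static class PingJob implements Job {
        @Override
        public void execute(JobExecutionContext ctx) {
            System.out.println("Fired at " + ctx.getFireTime());
        }
    }

    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

        JobDetail job = JobBuilder.newJob(PingJob.class)
                .withIdentity("pingJob", "demo")
                .build();

        // Full cron-style expressions are what give Quartz its flexibility.
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("pingTrigger", "demo")
                .withSchedule(CronScheduleBuilder.cronSchedule("0 0/15 * * * ?"))
                .build();

        scheduler.scheduleJob(job, trigger);
        scheduler.start();
    }
}
```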
Related: I have already seen a similar question on SO, but it does not clearly answer my doubts.
We have several Kafka clusters and a lot of operational habits around them. We have our own way to start/stop the clusters, lots of operations scripts that help maintain them, etc.
Now we would like to use Kafka Connect connectors for new needs, but from what I have seen, Kafka Connect looks extremely coupled to Confluent Hub.
It's as if I can't even use the connectors without having to install a fully operational Confluent Hub.
This makes it very difficult for us to use Kafka Connect connectors. I understand that Confluent Hub might be a framework that helps run those connectors, but it seems we can't even use a dissociated Kafka cluster (one not operated through Confluent Hub).
But maybe I am missing something.
Do you know if there is any way to properly use Kafka connectors on an already existing Kafka cluster (completely independent of Confluent Hub)?
EDIT:
This is more a question about the tight coupling between Confluent Hub and Kafka Connect. All the features that come with Kafka Connect (distributed workers to handle different failover scenarios, etc.) seem unusable without Confluent Hub, hence a "need" to have the Kafka cluster running exclusively via Confluent Hub, which is not an easy task when you already have a big existing Kafka cluster with lots of ops habits around it.
Kafka Connect is part of Apache Kafka. It's a pluggable framework for streaming integration, moving data into and out of Kafka from other systems.
To use Kafka Connect you need connectors for the specific technology with which you want to integrate. For example, S3 sink, Elasticsearch sink, JDBC source or sink, and so on.
The connector API is part of Apache Kafka, and available for anyone who wants to develop a connector.
Connectors are written by various people and organisations, and are available in various different ways. How you obtain a connector depends on which connector you want, how it's licensed, and how the author has made it available for distribution. You might go to GitHub, clone the repo, and build the JAR yourself; or you might be able to download the JAR directly.
All that Confluent Hub does is make lots of these connectors available for you in one place, easily searchable, and with an optional CLI tool that will install them for you.
Do you have to use Confluent Hub? No, not at all. Might it make your life easier in locating connectors that you want to use, and make it easier to install them? Hopefully :)
Disclaimer: I work for Confluent.
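To make the "no Confluent Hub required" point concrete: once a connector JAR is on the worker's plugin.path (or, as with the FileStream connectors, bundled with Apache Kafka itself), you register it through Kafka Connect's own REST API. A sketch in Java 11+ (the worker address, topic, and file path are placeholders):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterConnector {
    public static void main(String[] args) throws Exception {
        // FileStreamSinkConnector is bundled with Apache Kafka itself,
        // so nothing from Confluent Hub is involved here.
        String payload = "{"
                + "\"name\": \"file-sink-demo\","
                + "\"config\": {"
                + "  \"connector.class\": \"org.apache.kafka.connect.file.FileStreamSinkConnector\","
                + "  \"tasks.max\": \"1\","
                + "  \"topics\": \"demo-topic\","
                + "  \"file\": \"/tmp/demo-sink.txt\""
                + "}}";

        HttpRequest request = HttpRequest.newBuilder()
                // REST endpoint of a worker started with connect-distributed.sh
                // from the plain Apache Kafka distribution.
                .uri(URI.create("http://localhost:8083/connectors"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```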
We have a kind of evaluation job which consists of several thousand invocations of a legacy binary with various inputs, each of which runs for about a minute. The individual runs are perfectly parallelizable (one instance per core).
What is the state of the art to do this in a hybrid cloud scenario?
Kubernetes itself does not seem to provide an interface for prioritizing or managing waiting jobs. Jenkins would be good on these points, but feels like a hack. Of course, we could hack something together ourselves, but the problem should be sufficiently generic to already have an out-of-the-box solution.
There are a lot of frameworks that help manage jobs in a Kubernetes cluster. The most popular are:
Argo - for orchestrating parallel jobs on Kubernetes. Workflows are implemented as a Kubernetes CRD (Custom Resource Definition).
Airflow - has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers. Also take a look at its KubernetesExecutor.
I recommend you watch this video, which describes each framework and will help you decide which is better for you. For the parallelism itself, a plain Kubernetes Job may already go a long way (see the sketch below).
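A sketch of that plain-Job option, using the fabric8 kubernetes-client (6.x assumed; the image, command, and sizes are placeholders). One completion per binary invocation, with parallelism capping how many pods run at once:

```java
import io.fabric8.kubernetes.api.model.batch.v1.Job;
import io.fabric8.kubernetes.api.model.batch.v1.JobBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class EvalBatch {
    public static void main(String[] args) {
        Job job = new JobBuilder()
                .withNewMetadata().withName("eval-batch").endMetadata()
                .withNewSpec()
                    .withCompletions(2000)             // one per invocation
                    .withParallelism(64)               // pods running at once
                    .withCompletionMode("Indexed")     // pod gets JOB_COMPLETION_INDEX
                    .withNewTemplate()
                        .withNewSpec()
                            .withRestartPolicy("Never")
                            .addNewContainer()
                                .withName("eval")
                                // Placeholder image wrapping the legacy binary;
                                // the script picks its input from the index.
                                .withImage("registry.example.com/legacy-eval:1.0")
                                .withCommand("/opt/eval/run.sh")
                            .endContainer()
                        .endSpec()
                    .endTemplate()
                .endSpec()
                .build();

        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            client.batch().v1().jobs().inNamespace("default").resource(job).create();
        }
    }
}
```

This covers the fan-out and retry part; prioritization and queueing of competing jobs is where the frameworks above add value.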
You may be interested in the following articles about using Mesos for hybrid cloud:
Xue, Noha; Haugerud, Hårek; Yazidi, Anis (2017). Towards a Hybrid Cloud Platform Using Apache Mesos. pp. 143-148. doi:10.1007/978-3-319-60774-0_12.
Hybrid cloud technology is becoming increasingly popular as it merges private and public clouds to bring the best of two worlds together. However, due to the heterogeneous cloud installation, facilitating a hybrid cloud setup is not simple. Despite the availability of some commercial solutions to build a hybrid cloud, an open source implementation is still unavailable. In this paper, we try to bridge the gap by providing an open source implementation by leveraging the power of Apache Mesos. We build a hybrid cloud on top of multiple cloud platforms, private and public.
Apache Mesos For All Your Hybrid Cloud Needs
Choosing the Best Approach to Hybrid Cloud
I am taking a deep look inside Flink to see how I can use it on a project and had a question for the creators / high-level thinkers... why does Flink use YARN as the default resource manager?
Was Kubernetes considered? Or is it one of those things where we started on YARN, and it works pretty well...
I have come across many projects and articles that allow Kubernetes and YARN to work together, including the Myriad project that allows YARN to be deployed on Mesos (but I am on Kubernetes...).
I have a very large compute cluster (2,000 or so nodes) that I use, and I want to use the super cool CEP features of Flink feeding off a Kafka infrastructure (also deployed onto this Kubernetes environment).
I am looking to understand the reasons behind using YARN as the resource manager underneath Flink, and whether it would be possible (with some effort and contribution to the project) to make Kubernetes an option alongside YARN.
Please note - I am new to YARN - just reading up on it. Also new to Flink and learning about the deployment and scale-out architecture.
Flink is not tied to YARN. It can also run on Apache Mesos and there are also users running it on Kubernetes. In the current version (Flink 1.4.1), there are a few things to consider when running Flink in Kubernetes (see this talk by Patrick Lucas).
The Flink community is also currently working on improving Flink's support for container setups. The effort is called FLIP-6 and will be included in the next release (Flink 1.5.0).
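Since the question mentions CEP over Kafka, here is roughly what that looks like with the Flink 1.4-era APIs (the broker address, topic, and pattern below are invented for illustration):

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.cep.CEP;
import org.apache.flink.cep.PatternSelectFunction;
import org.apache.flink.cep.PatternStream;
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.SimpleCondition;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;

public class CepOverKafka {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "kafka:9092"); // placeholder
        props.setProperty("group.id", "cep-demo");

        DataStream<String> events = env.addSource(
                new FlinkKafkaConsumer011<>("events", new SimpleStringSchema(), props));

        // "WARN followed by ERROR within 10 seconds" -- an invented pattern.
        Pattern<String, ?> pattern = Pattern.<String>begin("warn")
                .where(new SimpleCondition<String>() {
                    @Override
                    public boolean filter(String value) {
                        return value.contains("WARN");
                    }
                })
                .followedBy("error")
                .where(new SimpleCondition<String>() {
                    @Override
                    public boolean filter(String value) {
                        return value.contains("ERROR");
                    }
                })
                .within(Time.seconds(10));

        PatternStream<String> matches = CEP.pattern(events, pattern);
        matches.select(new PatternSelectFunction<String, String>() {
            @Override
            public String select(Map<String, List<String>> match) {
                return "ALERT: " + match;
            }
        }).print();

        env.execute("cep-over-kafka");
    }
}
```

The job itself is deployment-agnostic; YARN, Mesos, or Kubernetes only matter for how the JobManager and TaskManagers are provisioned.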
I am working on improving the resource and task scheduling of Apache Flink. Please share any tutorial or document on the internal architecture of Apache Flink, as well as the proper method of debugging the Flink code.
Your help and experience analyzing the Flink code will be appreciated.
For how job scheduling works in Flink, I would start looking at https://ci.apache.org/projects/flink/flink-docs-release-1.2/internals/job_scheduling.html. Generally, the last part of the current Flink documentation contains some good information about the internals of Apache Flink.
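For the debugging part, one common approach (an assumption on my part, not the only way) is to run a small job with a local execution environment from your IDE; everything runs in one JVM, so breakpoints set in Flink's own runtime classes (e.g. the ExecutionGraph scheduling logic) are hit while the job runs:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class DebugHarness {
    public static void main(String[] args) throws Exception {
        // Client, JobManager, and TaskManager all run inside this JVM,
        // so an IDE debugger can step through Flink's scheduling code.
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.createLocalEnvironment();

        env.fromElements(1, 2, 3, 4)
           .map(x -> x * 2)
           .print();

        env.execute("debug-harness");
    }
}
```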
Are there Distributed Schedulers available for free commercial use?
I saw Quartz Job Scheduler but its distributed version comes at a premium.
Thanks
You can use Hinemos (http://hinemos-en.atomitech.jp/). You can control scheduling from a GUI. Spring Scheduler and Quartz Scheduler provide scheduling too, but only at the API level - you have to write code for your scheduling. Hinemos, on the other hand, provides a GUI to control it.
You can configure it for different clusters and issue scheduling commands via the GUI.
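For what it's worth, the open source Quartz distribution does include JDBC-JobStore clustering (failover and load balancing across scheduler nodes), which may already cover some "distributed" needs without a commercial license. A minimal sketch (the datasource settings are placeholders):

```java
import java.util.Properties;

import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;

public class ClusteredQuartz {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("org.quartz.scheduler.instanceName", "ClusteredScheduler");
        props.setProperty("org.quartz.scheduler.instanceId", "AUTO");
        props.setProperty("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
        props.setProperty("org.quartz.threadPool.threadCount", "10");

        // JDBC job store with clustering: all nodes share one database
        // and coordinate via row locks, giving failover and load balancing.
        props.setProperty("org.quartz.jobStore.class",
                "org.quartz.impl.jdbcjobstore.JobStoreTX");
        props.setProperty("org.quartz.jobStore.driverDelegateClass",
                "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
        props.setProperty("org.quartz.jobStore.isClustered", "true");
        props.setProperty("org.quartz.jobStore.clusterCheckinInterval", "20000");

        // Placeholder datasource -- point this at your own database.
        props.setProperty("org.quartz.jobStore.dataSource", "quartzDS");
        props.setProperty("org.quartz.dataSource.quartzDS.driver", "org.postgresql.Driver");
        props.setProperty("org.quartz.dataSource.quartzDS.URL", "jdbc:postgresql://db:5432/quartz");
        props.setProperty("org.quartz.dataSource.quartzDS.user", "quartz");
        props.setProperty("org.quartz.dataSource.quartzDS.password", "changeme");

        Scheduler scheduler = new StdSchedulerFactory(props).getScheduler();
        scheduler.start();
    }
}
```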
What are the available alternatives to Quartz?
There are no known competing open source projects (there are a few other open source schedulers, but they are basically just Cron replacements written in Java).
Commercially, you will want to look at the well-regarded Flux scheduler.