Kafka replicator setup architecture

I have two Kafka clusters, one in London and one in NYC. Each has three Zookeeper instances and two brokers. There are two topics in each region: an InputData topic and an OutputData topic. I want each region to replicate data from the other, i.e. for them to effectively share a global InputData and OutputData topic. If NYC adds two messages, they should be replicated to EMEA. If EMEA adds three messages, those should go to NYC.
My question is how do I achieve this? Does two-way replication work, or do you get into an endless loop? Are there issues with concurrency, i.e. what happens if NYC writes messages locally at the same time EMEA writes its messages, and then the replicator tries to get the topics in sync, but they are now out of sync?
Is this even possible? Or can replication only work one way, i.e. you have to have a source topic that is only written from the main cluster, and the places it is replicated to are read-only?
My second question is how do I make the replicator fault tolerant. Do I run it in distributed mode with one Connect worker per server, which in this case would make it two Connect workers per cluster?

Related

How do I send data to multiple Kafka producers one time

I am looking for help with a Kafka producer that sends to multiple clusters in parallel. I have two environments to push data to (cert and dev); every time, I run the producer to send data to cert and dev separately (one topic). Is there a way I can send data to both clusters together?
Tying your application (producers) to a particular environment topology (cert / dev) doesn't sound like the best approach. There is no way to produce from the same producer instance to two clusters, so you would have to have two producer instances and hope that both behave exactly the same when producing. Any problem (e.g. a network glitch) that causes one to fail and not the other means you end up with divergence between your two environments.
Instead use something like Confluent Replicator or MirrorMaker 2 to stream records from one cluster to another. That way you can build your application to produce records to a target cluster and, decoupled from that, populate additional environments/clusters as desired.
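As an illustration, a minimal MirrorMaker 2 configuration could populate dev from cert automatically; the cluster aliases, bootstrap addresses, and topic name below are assumptions, not your actual setup:

```properties
# mm2.properties - minimal sketch; aliases, addresses, and topic
# name are placeholders to adapt to your environments
clusters = cert, dev
cert.bootstrap.servers = cert-kafka:9092
dev.bootstrap.servers = dev-kafka:9092

# the application produces only to cert; MM2 copies the topic into dev
cert->dev.enabled = true
cert->dev.topics = my-topic
```

You would then run it with the stock launcher, e.g. `bin/connect-mirror-maker.sh mm2.properties`, so only one producer path exists in your application code.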

Apache Flink Kafka Integration Partition Separation

I need to implement the data flow below. I have one Kafka topic with 9 partitions, which I can read with a parallelism level of 9. I also have a 3-node Flink cluster, and each node has 24 task slots.
First of all, I want to spread the Kafka partitions so that each server reads 3 partitions, as below. Order does not matter; I only transform the Kafka messages and send them to a DB.
Second, I want to increase my degree of parallelism when saving to the NoSQL DB. If I increase my parallelism to 48, since writing to the DB is an I/O operation and does not consume much CPU, I want to be sure that when Flink rebalances my messages, each message stays on the same server.
Is there any advice for me?
If you want to spread your Kafka readers across all 3 nodes, I would recommend starting the TaskManagers with 3 slots each and setting the parallelism of the Kafka source to 9.
The problem is that at the moment it is not possible to control how tasks are placed if more slots are available than the required parallelism. This means that if you have fewer sources than slots, it might happen that all sources are deployed to one machine, leaving the other machines empty (source-wise).
Being able to spread tasks out across all available machines is a feature the community is currently working on.
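As a sketch of that recommendation, the slot count is a per-TaskManager setting; the values below assume the 3-node, 9-partition setup described above:

```yaml
# flink-conf.yaml on each of the 3 TaskManagers:
# with 3 slots per node and a source parallelism of 9, the
# 9 Kafka readers fill all 9 slots, 3 per machine, so no
# node is left without a source
taskmanager.numberOfTaskSlots: 3
parallelism.default: 9
```

You could also keep `parallelism.default` lower and call `setParallelism(9)` only on the Kafka source operator.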

Kafka Topology Best Practice

I have 4 machines where a Kafka cluster is configured with a topology in which each machine has one Zookeeper instance and two brokers.
With this configuration, what do you advise as the maximum number of topics and partitions for best performance?
Replication factor: 3
Using Kafka 0.10.XX
Thanks!
Each topic is restricted to 100,000 partitions no matter how many nodes (as of July 2017).
As to the number of topics, that depends on the size of the smallest RAM across the machines. This is due to Zookeeper keeping everything in memory for quick access (also, it doesn't shard the znodes, it just replicates them across ZK nodes upon write). This effectively means that once you exhaust one machine's memory, ZK will fail to add more topics. You will most likely run out of file handles before reaching this limit on the Kafka broker nodes.
To quote the Kafka docs (6.1 Basic Kafka Operations, https://kafka.apache.org/documentation/#basic_ops_add_topic):
Each sharded partition log is placed into its own folder under the Kafka log directory. The name of such folders consists of the topic name, appended by a dash (-) and the partition id. Since a typical folder name can not be over 255 characters long, there will be a limitation on the length of topic names. We assume the number of partitions will not ever be above 100,000. Therefore, topic names cannot be longer than 249 characters. This leaves just enough room in the folder name for a dash and a potentially 5 digit long partition id.
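The 249-character limit quoted above falls out of simple arithmetic on the folder-name budget; a quick sketch:

```python
# The partition folder is named "<topic>-<partition>" and is capped
# at 255 characters; with partition ids assumed to stay below
# 100,000 (at most 5 digits), the topic name gets what is left over.
MAX_FOLDER_NAME = 255
dash = 1
partition_digits = len(str(100_000 - 1))  # "99999" -> 5 digits
max_topic_name = MAX_FOLDER_NAME - dash - partition_digits
print(max_topic_name)  # 249
```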
To quote the Zookeeper docs (https://zookeeper.apache.org/doc/trunk/zookeeperOver.html):
The replicated database is an in-memory database containing the entire data tree. Updates are logged to disk for recoverability, and writes are serialized to disk before they are applied to the in-memory database.
Performance:
Depending on your publishing and consumption semantics, the ideal topic/partition counts will change. The following is a set of questions you should ask yourself to gain insight into a potential solution (your question is very open-ended):
Is the data I am publishing mission critical (i.e. I cannot lose it, I must be sure I published it, I must have exactly-once consumption)?
Should I make the producer.send() call as synchronous as possible, or continue to use the asynchronous method with batching (do I trade publishing guarantees for speed)?
Are the messages I am publishing dependent on one another? Does message A have to be consumed before message B (implying A was published before B)?
How do I choose which partition to send my message to?
Should I: assign the message to a partition (extra producer logic), let the cluster decide in a round-robin fashion, or assign a key which will hash to one of the partitions for the topic (you need an evenly distributed hash to get good load balancing across partitions)?
How many topics should you have? How is this connected to the semantics of your data? Will auto-creating topics for many distinct logical data domains be efficient (think of the effect on Zookeeper and the administrative pain of deleting stale topics)?
Partitions provide parallelism (more consumers possible) and possibly increased positive load-balancing effects (if the producer publishes correctly). Would you want to assign parts of your problem domain to specific partitions (e.g. when publishing, send data for client A to partition 1)? What side-effects does this have (think refactorability and maintainability)?
Will you want to make more partitions than you need so you can scale up later with more brokers/consumers? How realistic is automatic scaling of a Kafka cluster given your expertise? Will this be done manually? Is manual scaling viable for your problem domain (are you building Kafka around a fixed system with well-known characteristics, or are you required to handle severe spikes in messages)?
How will my consumers subscribe to topics? Will they use pre-configured subscriptions or a regex to consume many topics? Are the messages between topics dependent or prioritized (you need extra logic on the consumer to implement priority)?
Should you use different network interfaces for replication between brokers (e.g. port 9092 for producers/consumers and 9093 for replication traffic)?
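On the key-based partition choice above, the idea can be sketched as follows. Note this is an illustration only: Kafka's default partitioner actually hashes the key bytes with murmur2, and md5 here is just a stand-in:

```python
import hashlib

# Illustrative key -> partition mapping (Kafka's default partitioner
# uses murmur2 on the serialized key; md5 is a stand-in for the sketch)
def partition_for(key: str, num_partitions: int) -> int:
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# the same key always lands on the same partition
print(partition_for("client-A", 6) == partition_for("client-A", 6))  # True
```

The practical point is that per-key ordering is preserved only because a given key deterministically maps to one partition; an unevenly distributed key space gives you hot partitions.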
Good Links:
http://cloudurable.com/ppt/4-kafka-detailed-architecture.pdf
https://www.slideshare.net/ToddPalino/putting-kafka-into-overdrive
https://www.slideshare.net/JiangjieQin/no-data-loss-pipeline-with-apache-kafka-49753844
https://kafka.apache.org/documentation/

Hint about kafka cluster setup

I have the following scenario:
4 wearable sensors attached to individuals.
A potentially unbounded number of individuals.
A Kafka cluster.
I have to perform real-time processing on the data streams, on a cluster running an instance of Apache Flink.
Kafka is the data hub between the Flink cluster and the sensors.
Moreover, the subjects' streams are totally independent, and different streams belonging to the same subject are also independent of each other.
This is the setup I imagine:
I set up a specific topic for each subject, and each topic is partitioned into 4 partitions, one for each sensor on that person.
In this way I thought to establish a consumer group for every topic.
Actually, my data volume is not that big, but my interest is in building an easily scalable system. One day I might have hundreds of individuals, for instance...
My questions are:
Is this setup good? What do you think about it?
In this way I will have 4 Kafka brokers and each one will handle a partition, right (without considering potential replicas)?
Destroy me, guys,
and thanks in advance.
You can't have an infinite number of topics in a Kafka cluster, so if you plan to scale beyond 10,000 or so topics, you should consider another design. Instead of giving each individual a dedicated topic, you can use an individual's ID as a key and publish data as a key/value pair to a smaller number of topics. In Kafka you can have an (almost) infinite number of keys.
Also consider more partitions. Each of your 4 brokers can handle many partitions. If you only have 4 partitions in a topic, then you can have at most 4 consumers working together in parallel in a consumer group (in your case, in Flink).
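The 4-partition ceiling on consumer parallelism can be illustrated with a toy round-robin assignment (the consumer names are made up; real group assignment is done by the broker-side coordinator, but the capping effect is the same):

```python
# Toy round-robin assignment of partitions to consumers in a group:
# a 5th consumer over 4 partitions simply gets nothing, so consumer
# parallelism is capped by the partition count.
def assign(num_partitions, consumers):
    assignment = {c: [] for c in consumers}
    for p in range(num_partitions):
        assignment[consumers[p % len(consumers)]].append(p)
    return assignment

print(assign(4, ["c1", "c2", "c3", "c4", "c5"]))
# {'c1': [0], 'c2': [1], 'c3': [2], 'c4': [3], 'c5': []}
```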

What is the best approach to keep two Kafka clusters in sync

I have to set up two Kafka clusters in two different data centers (DCs) with the same topics and configuration. The reason is that the connectivity between the two data centers is poor, so we cannot create a single global cluster.
We have producers and consumers that publish and subscribe to the topics in each DC.
The problem is that I need to keep both clusters in sync.
Let's say all messages written to the first DC should eventually be replicated to the second, and the other way around.
I am evaluating the Kafka MirrorMaker tool, creating the mirror by consuming messages from the first cluster and producing them to the second one. However, it is also required to replicate data from the second to the first, because writing data is allowed in both clusters.
I don't think the Kafka MirrorMaker tool fits our case.
I'd appreciate any suggestions.
Thanks in advance.
Depending on your exact requirements, you can use MirrorMaker for your use case.
One option would be to just have two separate topics, let's call them topic1 on cluster 1 and topic2 on cluster 2. All your producing threads write to the "local" topic, and you use MirrorMaker to replicate that topic to the remote cluster.
For your consumers, you simply subscribe to both topics on whichever cluster is closest to you; that way you will get all records that were written on either cluster.
I have created an illustration that hopefully helps:
Alternatively, you could create aggregation topics on both clusters and use MirrorMaker to replicate data into them; this would let you have all the data in one topic for consumption.
You would have duplicate data on the same cluster this way, but you could take care of that with lower retention settings on the input topics.
Again, hopefully the following picture helps to explain my thinking:
In order for this to work, you will need to configure MirrorMaker to replicate a topic into a topic with a different name, which is not a standard thing for it to do. I have written a small blog post on how to do this, if you want to investigate this option further.
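If you are on a Kafka version that ships MirrorMaker 2 (2.4+, mentioned in an earlier answer), the renaming happens automatically: remote topics get the source-cluster alias as a prefix, which is also what prevents replication loops in active/active setups. A minimal sketch, where the aliases, addresses, and topic name are all assumptions:

```properties
# mm2.properties - active/active sketch between two assumed DCs
clusters = dc1, dc2
dc1.bootstrap.servers = dc1-kafka:9092
dc2.bootstrap.servers = dc2-kafka:9092

# replicate the topic in both directions
dc1->dc2.enabled = true
dc1->dc2.topics = events
dc2->dc1.enabled = true
dc2->dc1.topics = events
# on dc2 the mirrored topic appears as "dc1.events" (and vice versa),
# so consumers subscribe to both "events" and the remote-prefixed copy
```

The consumer-side pattern is then the same as in the first option above: subscribe to the local topic plus the prefixed remote copy on whichever cluster is closest.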