How do I find the current controller ID, preferably from the command line, on a Kafka cluster that is using KRaft?
Kafka Version: 3.3
You probably mean the active controller ID.
Kafka 3.3 comes with the kafka-metadata-quorum tool.
> bin/kafka-metadata-quorum.sh --bootstrap-server broker_host:port describe --status
ClusterId: fMCL8kv1SWm87L_Md-I2hg
LeaderId: 3002
...
Docs: https://kafka.apache.org/documentation/#kraft_metadata_tool
When using KRaft, a cluster no longer has a single controller. Instead, all nodes in the cluster that run with the "controller" role take part in the controller metadata quorum.
The reason that tools report seemingly random IDs is an intentional choice to return a random controller participant ID in the existing metadata APIs. This helps distribute load evenly across the nodes participating in the quorum.
Underneath, the participants of the metadata quorum maintain a special topic that is replicated with the Raft consensus algorithm. This topic has a leader, and you can get the leader ID of that topic. But it is important to note that this is not the same thing as the controller on a ZooKeeper-backed cluster; that role no longer exists when running with KRaft and, as mentioned, is now a role shared by many nodes.
You should be able to fetch the current leader ID of the cluster by requesting metadata for the __cluster_metadata topic or, as suggested in another answer, by using the kafka-metadata-quorum script.
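For completeness, the same tool can also show, per quorum participant, which node currently leads the metadata log. This is only a sketch: broker_host:port is a placeholder, the node IDs are illustrative, and the exact output columns may vary between versions.
> bin/kafka-metadata-quorum.sh --bootstrap-server broker_host:port describe --replication
NodeId  LogEndOffset  Lag  LastFetchTimestamp  LastCaughtUpTimestamp  Status
3002    ...           0    ...                 ...                    Leader
3001    ...           0    ...                 ...                    Follower
3003    ...           0    ...                 ...                    Follower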
We are exploring implementing multi-tenancy in Kafka for each of our dev teams, hosted in AWS EKS.
The initial thought is to make it multi-tenant at the topic level.
NLB-Nginx-Ingress: an ingress host-route for each team, with all the brokers that host that team's topic-partition leaders added as the backend.
Access restriction through ACLs at the broker level, based on a principal such as a user.
Sample flow:
Ingress book-keeping challenges:
When someone from the foobar team creates a new topic and it lands on a new broker, we need to add that broker to the backend of the respective ingress.
If a broker goes down, the ingress again needs to be updated.
Prune brokers when a partition leader goes away due to topic deletion.
What I'm Looking for:
Apart from writing an operator or app to do the above tasks, is there a better way to achieve this? I'm open to completely new suggestions as well, since this is just at the POC stage.
PS: I'm new to Kafka, and if this exchange is not suitable for this question, please suggest the right one to post on. Thanks!
First of all, ACL restrictions are cluster-level, not broker-level.
Secondly, for the bootstrapping process you only need to reach at least one active broker in the cluster; it will send back metadata describing where the partition leaders are, and on the ongoing connection the client will connect to those brokers accordingly.
There is no need to put a load balancer in front of Kafka bootstrapping; the suggestion is to put two or more brokers in a comma-separated list. The client will connect to the first available broker and get the metadata; for further connections, the client needs to be able to reach all brokers in the cluster directly.
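To illustrate (just a sketch; the hostnames are hypothetical), a client is normally pointed at two or more brokers directly rather than at a single load-balanced endpoint:
# client.properties - illustrative only
bootstrap.servers=broker1.internal:9092,broker2.internal:9092,broker3.internal:9092
# The client bootstraps from the first reachable broker, fetches the cluster
# metadata, and then connects directly to the partition leaders it needs.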
You can use ACLs to restrict access by principals (users) to topics in the cluster based on their needs.
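As an example of that (a sketch only; it assumes a reasonably recent Kafka, an authorizer enabled on the brokers, and a hypothetical principal User:foobar whose topics share the prefix foobar.):
bin/kafka-acls.sh --bootstrap-server broker_host:9092 \
  --add --allow-principal User:foobar \
  --operation Read --operation Write \
  --topic foobar. --resource-pattern-type prefixed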
I'm planning to build a Kafka cluster using two servers, and to host ZooKeeper on these two servers as well.
The question is: since Kafka requires ZooKeeper to run, what is the best ZooKeeper cluster layout for running a Kafka cluster on two servers?
For example, I'm currently running two ZooKeepers (one on each server) and one Kafka broker on each server, and in the Kafka configuration both brokers point to both ZooKeepers.
Is there a better way to do this?
First of all, you don't have to set up ZooKeeper and Kafka on the same servers. One of the roles of ZooKeeper is electing the controller (the broker responsible for maintaining the leader/follower relationship for all the partitions). For the election, a majority of ZooKeeper nodes must be alive. In your case, if even one ZooKeeper instance is down, you cannot elect a controller, so there is no difference between having one ZooKeeper or two. That's why it is recommended to have at least 3 nodes in a ZooKeeper ensemble; that way you can handle the failure of one ZooKeeper node.
In addition to this, it is highly recommended to have at least three brokers in your Kafka cluster to maintain both consistency and high availability. (link1, link2)
UPDATE:
As long as you are limited to only two servers, you can consider sacrificing some high availability by setting min.insync.replicas=2 on the brokers and creating topics with replication.factor=2. If HA is more important to you than the risk of data loss, you can instead keep the default broker config min.insync.replicas=1, again with topic replication.factor=2. In these circumstances, those are your options IMHO. (Having one or two ZooKeepers is not important, as I mentioned above.)
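As an illustration of the first option (a sketch; the topic name and broker addresses are placeholders, and it assumes a Kafka version whose tools accept --bootstrap-server):
bin/kafka-topics.sh --bootstrap-server server1:9092,server2:9092 \
  --create --topic my-topic \
  --partitions 3 --replication-factor 2 \
  --config min.insync.replicas=2
Note that with replication.factor=2 and min.insync.replicas=2, producers using acks=all will be blocked as soon as one of the two brokers is down; that is exactly the consistency-over-availability trade-off described above.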
I am often faced with the same problem as you, #frisky5, where I would like to achieve a "suboptimal" HA system using only 2 nodes, so workarounds are always needed with cloud-native frameworks that rely on the assumption that clusters will have lots of nodes available.
That ain't always the case in real life, is it? ;)
That being said, I see you as essentially having 2 options:
Externalize the ZooKeeper configuration and data onto a storage system replicated across the 2 nodes (e.g. DRBD).
Replicate the Kafka data volumes entirely onto the second node and use 2 one-node Kafka clusters that you switch on and off depending on which node is the current master.
I would go for the first option. In that case you would have 2 Kafka servers and one ZooKeeper server whose IP needs to be static (a virtual IP). When the ZooKeeper node goes down, it is restarted on the second node with the same VIP, but it needs access to the synchronized data folder.
I am not too familiar with ZooKeeper's internals and I can't tell you whether it will run into conflicts when starting up on a data store that "wasn't its own", but I would guess it makes sense for you to test it with a simple rsync setup, as sketched below.
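A minimal sketch of such a test (the data path and hostname are assumptions, and ZooKeeper should be stopped on the source node while the copy runs):
rsync -a --delete /var/lib/zookeeper/ node2:/var/lib/zookeeper/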
Another way to achieve consensus, if you are using a k3s-based Kubernetes cluster, would be to rely on the internal k8s distributed consensus mechanics to "tell Kafka" which node is the leader. This works for the postgres operator by Crunchy Data because Patroni is cool ( https://patroni.readthedocs.io/en/latest/kubernetes.html ) 😎 but I am not sure whether Kafka/ZooKeeper are that flexible and can talk to a REST API to set their locks...
Once you have achieved this intermediate step, you can use a PostgreSQL db as the external source of truth for k3s, and then it is as simple as syncing the Postgres data folder between the machines (easily done with rsync). The beauty of this approach is that it is far more generic and could be used for other systems too.
Let me know what you think about these two approaches and whether you manage to set up a test environment. If you do it on GitHub, I can help you out with the implementation.
For a cross-network Confluent Platform deployment, we have one Kafka cluster on-premise and another on AWS, with data replicated from on-prem to AWS using MirrorMaker. Both clusters are independent, with their own Schema Registry, REST Proxy and Connect. Both clusters have different sets of producers and consumers, and selected topics are mirrored between the clusters.
What is the best practice for deploying Schema Registry? Should we have one master (say on-premise) and run the other instances, on-prem and on AWS, as non-eligible masters?
We suspect Schema Registry can have issues with schema IDs when topics are replicated between clusters and there are 2 masters (AWS and on-prem).
Thanks!
If you use two different master registries, I find that would be difficult to manage (see mistake #2 for self-managed registries). The purpose of master.eligibility=false on a second instance/cluster is that all ID registration events have a single source of truth. As the docs say, "the Schema Registry nodes in both datacenters link to the primary Kafka cluster in DC A", so you would need to establish a valid network link between AWS and on-prem anyway.
Otherwise, with multiple masters, you will need to mirror the schemas topic if you want the exact same subjects and schema IDs between environments. However, that is primarily meant to be used as a backup, and you would eventually run into conflicting schema IDs for any producer in the destination region pushing schemas to the other master. That is why the first diagram shows only consumers in the remote datacenter.
If you do not do this, then suppose you mirrored a topic from cluster A to cluster B and the consumer used registry B in its settings: it would attempt to look up the ID embedded in the message, which came from registry A, and that ID either would not exist in registry B or would point at the wrong schema for the topic being read.
I wrote a Kafka Connect plugin to work around that issue by registering a new ID in a remote master registry - https://github.com/cricket007/schema-registry-transfer-smt - though you said you're using MirrorMaker, so you would need to take the logic there and apply it to the MessageHandler interface in MirrorMaker.
I've really only worked with one master, on-prem; in AWS, the registry settings have the ZooKeeper connection pointing at the on-prem cluster.
And we don't mirror everything as the docs suggest, only specific topics. The advantage of using Replicator rather than MirrorMaker is that consumer failover is better supported; rather than simply getting data "over the wire", your clients also become less dependent on where they are running.
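As a rough sketch of the setup described above (property names as used by the ZooKeeper-based Confluent Schema Registry of that era; hostnames are placeholders), the secondary AWS instance points back at the primary on-prem cluster and is made non-eligible for mastership:
# schema-registry.properties on the AWS (secondary) instance - illustrative only
listeners=http://0.0.0.0:8081
# point at the primary (on-prem) cluster so the _schemas topic has a single source of truth
kafkastore.connection.url=onprem-zk1:2181,onprem-zk2:2181,onprem-zk3:2181
# never allow this instance to become the master that assigns new schema IDs
master.eligibility=false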
I am new to Fabric technologies. I read some articles about the Kafka-based ordering service and its advantages. Some of the articles say that running multiple Kafka-based orderers provides fault tolerance. I deployed 3 Kafka-based ordering nodes (orderer0, orderer1, orderer2), then stopped 2 orderers using the following commands:
docker stop orderer1.example.com
docker stop orderer2.example.com
The REST API still worked correctly. Then I stopped orderer0 using:
docker stop orderer0.example.com
Now my REST API stopped working with a network connection problem. Then I started orderer1 and orderer2 using the following commands:
docker start orderer1.example.com
docker start orderer2.example.com
But my REST API was still not working; it faced the same network connection problem.
Finally I started orderer0 using:
docker start orderer0.example.com
Now the network is working fine.
My questions are:
What is the actual use of the Kafka-based ordering service?
How can we implement the Kafka-based ordering service so that an orderer going down does not break the network?
Fabric:1.1.0
Composer:0.19.16
Node:8.11.3
OS: Ubuntu 16.04
I had the same problem as you when I wanted to set up several orderers. To solve this problem I see 2 solutions:
Change the SDK. Currently your SDK tries to contact orderer0 and returns an error if it fails; change this so that the request loops over a list of orderers and only returns an error if none of them is reachable.
Easier: set up a load balancer in front of the orderers.
To answer your question: the advantage of setting up the Kafka-based ordering service is that the data for the proposed blocks is spread over several servers. There is fault tolerance because if an orderer crashes and reconnects to the Kafka cluster, it is able to resynchronize. Performance is also better (that's theoretical; I did not test this point).
As per Kafka Ordering Services
Each channel maps to a separate single-partition topic in Kafka
This means that all messages in the topic are totally-ordered in the order in which they were sent.
and
At a minimum, [the number of brokers] should be set to 4. (As we will explain in Step 4 below, this is the minimum number of nodes necessary in order to exhibit crash fault tolerance, i.e. with 4 brokers, you can have 1 broker go down, all channels will continue to be writeable and readable, and new channels can be created.)
The above assumes a Kafka replication factor of 3 and the producing client to set min.insync.replicas ideally to 2 to make sure that all writes are replicated to at least two servers.
Based on your network issues, it sounds to me like you did not actually configure all three brokers correctly (I would need to see your entire Docker setup and what the Dockerfile is actually doing). Even assuming you did configure all three brokers for this "REST API", the channel needs a single-partition Kafka topic with 3 replicas, and the default replication factor is 1, which is what auto-created topics get. So I suggest you clean it all up, start the three brokers, manually create the topic with 1 partition and 3 replicas, and then start Hyperledger.
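For example (a sketch only; the channel/topic name and ZooKeeper address are placeholders, and the Kafka versions shipped for Fabric 1.1 still use --zookeeper rather than --bootstrap-server in this tool):
bin/kafka-topics.sh --zookeeper zookeeper0:2181 \
  --create --topic mychannel \
  --partitions 1 --replication-factor 3 \
  --config min.insync.replicas=2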
If the REST API is the actual problem, not the Kafka connection, then you would need a load balancer, I guess.
I would like to deploy a Kafka cluster in two datacenters with the same number of nodes in each DC. The first DC is used in active mode while the second is passive.
For example, let's say that both datacenters have 3 nodes, with 2 in-sync replicas (ISR) in the first DC and one ISR in the second DC.
Is it possible to have a third DC containing an arbiter/witness/observer node such that, in case of failure of one DC, a leader election can succeed with the correct outcome in terms of consistency? MongoDB has such a feature, called a replica set arbiter.
What about deploying ZooKeeper across the three datacenters? From my understanding, ZooKeeper does not hold the Kafka data and is not contacted for each new record written to a Kafka topic, i.e. you do not pay the latency to the third DC for each new record.
There is a presentation from Kafka Summit 2017, One Data Center is Not Enough: Scaling Apache Kafka Across Multiple Data Centers, that talks about this setup. There is also some interesting information in the Confluent whitepaper Disaster Recovery for Multi-Datacenter Apache Kafka® Deployments.
It says this could work, and they call it an observer node, but it also says no one has ever tried it.
ZooKeeper keeps track of the following metadata for Kafka (0.9.0+):
Electing a controller - the controller is one of the brokers and is responsible for maintaining the leader/follower relationship for all the partitions. When a node shuts down, it is the controller that tells other replicas to become partition leaders to replace the partition leaders on the node that is going away. ZooKeeper is used to elect a controller, make sure there is only one, and elect a new one if it crashes.
Cluster membership - which brokers are alive and part of the cluster? This is also managed through ZooKeeper.
Topic configuration - what overrides there are for a topic, where the partitions are located, etc.
Quotas - how much data is each client allowed to read and write
ACLs - who is allowed to read and write to which topic
More detail on the dependency between Kafka and ZooKeeper can be found in the Kafka FAQ and in an answer on Quora from a Kafka committer working at Confluent.
From the resources I have read, a setup with two DCs (Kafka plus ZooKeeper) and an arbiter/witness/observer ZooKeeper node in a third, higher-latency DC could work, but I haven't found any resource that has actually experimented with it.
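As a rough sketch of what such an ensemble could look like (hostnames and the exact split of voters are assumptions; the Kafka brokers' zookeeper.connect would typically list only the DC1/DC2 nodes):
# zoo.cfg - identical on all five ZooKeeper nodes, illustrative only
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
# two voters in DC1, two in DC2, and a tie-breaking voter in DC3
server.1=zk1.dc1.example.com:2888:3888
server.2=zk2.dc1.example.com:2888:3888
server.3=zk1.dc2.example.com:2888:3888
server.4=zk2.dc2.example.com:2888:3888
server.5=zk-arbiter.dc3.example.com:2888:3888
With this layout a majority (3 of 5) survives the loss of either main DC, and the DC3 node acts purely as the tie-breaker.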