I couldn't find any info about this issue, so I'd be glad if someone could help me on this.
I have a Kerberized cluster with services such as HBase, MapReduce, HDFS, Zookeeper... all Kerberized and working.
Let's imagine I want to add some Kafka brokers to the cluster, but I do not want to Kerberize Kafka, since the mere thought of a Kerberized Kafka is painful enough.
I don't know if I'm missing something, some parameter... probably I am... but can Zookeeper be told to also accept PLAINTEXT requests for some nodes, or for some specific znodes, such as the kafka one in this example:
zookeeper:2181/kafka
To sum up, the question is:
Is there any option to include a non-Kerberized Kafka broker and make it work against the already Kerberized Zookeeper in the cluster?
If you need a configuration like this:
[zookeeper] <----- SASL ----> [kafka] <----- non-authenticated request ---> [clients]
then yes, it's possible. You just need to:
Create a principal (with a keytab) for the brokers, to be used for communicating with Zookeeper.
Configure Zookeeper ACLs, granting cdrwa access on the node zookeeper:2181/kafka to that principal (a command sketch follows below).
Copy the keytab to the brokers and configure the Kafka JAAS file like this (Client is the login context name Kafka's Zookeeper client reads by default):
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/path/to/keytab"
  principal="user@REALM";
};
Then set zookeeper.set.acl=true in the Kafka configuration, but do not set any authorizer.class.name (that would turn on authorization checks for Kafka producers and consumers).
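A minimal command sketch for steps 1 and 2 plus the broker-side settings, assuming an MIT Kerberos KDC, the placeholder realm EXAMPLE.COM, and the principal, keytab, and chroot paths shown here (none of these names come from the question):
# 1. Create a principal and keytab for the brokers' Zookeeper client identity (placeholder names)
kadmin.local -q "addprinc -randkey kafka-zk@EXAMPLE.COM"
kadmin.local -q "ktadd -k /etc/security/keytabs/kafka-zk.keytab kafka-zk@EXAMPLE.COM"
# 2. From a Kerberos-authenticated zookeeper-shell, once the /kafka chroot exists,
#    grant that principal full access on it (the sasl ACL id is the principal's primary)
setAcl /kafka sasl:kafka-zk:cdrwa
# 3. On each broker: point Kafka at the chroot, enable ACL creation, and load the JAAS file
#    server.properties:
#      zookeeper.connect=zookeeper:2181/kafka
#      zookeeper.set.acl=true
#    environment when starting the broker:
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_jaas.conf"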
I've enabled SASL PLAIN authentication for my Zookeeper and broker. It seems to be working: I can only see topics and their content by using the credentials I set. The problem is that even though the status of all connectors was "RUNNING", no data was arriving in the Kafka topics. So I restarted Kafka Connect and now I can't connect to it; I get a connection refused error.
It was already confusing me: how does Kafka Connect establish a connection to a SASL-enabled broker? It needs to authenticate to be able to write data to a topic, right? How can I do that? For example, I've provided the Schema Registry basic authentication information for Kafka Connect in the connect-distributed.properties file like this:
schema.registry.basic.auth.user.info=admin:secret
key.converter.basic.auth.user.info=admin:secret
value.converter.basic.auth.user.info=admin:secret
schema.registry.basic.auth.credentials.source=USER_INFO
key.converter.basic.auth.credentials.source=USER_INFO
value.converter.basic.auth.credentials.source=USER_INFO
I believe I need to do something similar, but I didn't see anything about that in the tutorials.
EDIT:
The Connect service seems to be running, but the connectors can't fetch the metadata of the topics. That means there is a problem with authentication to Kafka.
It seems to be working with the configuration below. I am not sure whether the producer. and consumer. prefixed parts are needed, but they don't cause any problems. I've added these lines to the connect-distributed.properties file.
sasl.mechanism=PLAIN
security.protocol=SASL_PLAINTEXT
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="admin" \
password="secret";
producer.sasl.mechanism=PLAIN
producer.security.protocol=SASL_PLAINTEXT
producer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="admin" \
password="secret";
consumer.sasl.mechanism=PLAIN
consumer.security.protocol=SASL_PLAINTEXT
consumer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="admin" \
password="secret";
I am trying to enable SASL username and password authentication for a Kafka cluster without SSL. I followed the steps in this Stack Overflow question:
Kafka SASL zookeeper authentication
SASL authentication seems to be working for the Kafka brokers: consumers and producers have to authenticate before writing to or reading from a topic. So far so good.
The problem is with creating and deleting topics on Kafka. When I try to use the following command, for example:
~/kafka/bin/kafka-topics.sh --list --zookeeper 10.x.y.z:2181
I am able to list all topics in the Kafka cluster and create or delete any topic, with no authentication at all.
I tried to follow the steps here:
Super User Authentication and Authorization
but nothing seems to work.
Any help in this matter is really appreciated.
Thanks & Regards,
Firas Khasawneh
You need to add zookeeper.set.acl=true to your Kafka server.properties so that Kafka creates everything in Zookeeper with ACLs set. Topics that already exist will have no ACLs, so anyone can remove them directly from Zookeeper.
Actually because of that mess, I had to delete everything from my zookeeper and Kafka and start from scratch.
But once everything is set, you can open a Zookeeper shell to verify that the ACLs are indeed set:
KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/your/jaas.conf" bin/zookeeper-shell.sh XXXXX:2181
From the shell you can run getAcl /brokers/topics and check that world/anyone does not have cdrwa access.
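For illustration only (the principal shown is a placeholder; the actual id depends on the SASL identity your brokers use), the output for a secured znode should look roughly like this, with full access restricted to the broker principal and world limited to read:
getAcl /brokers/topics
'sasl,'kafka
: cdrwa
'world,'anyone
: r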
On a side note, the link you provided doesn't seem to reflect how the current version of Kafka stores information in Zookeeper. I briefly looked at the code, and for those kafka-topics.sh commands the topic information comes from /brokers/topics rather than /config/topics.
I have a list of brokers for my Kafka cluster. How can I get the Zookeeper host from the broker list?
If I understood your question correctly, you want to register your brokers with a Zookeeper cluster. This actually works the other way around: you have to tell each broker where your Zookeeper server (or ensemble) can be found. Have a look at the broker configuration setting zookeeper.connect. Together with broker.id, it registers each broker with the Zookeeper cluster.
Example:
broker.id=1
zookeeper.connect=zk-host-1:2181,zk-host-2:2181,zk-host-3:2181
Hope that answers your question.
You cannot.
Zookeeper is intended to be abstracted away. There is no such API or method to get Zookeepers connected to a broker.
You'll need to SSH to a broker in that list (which you could do from Java).
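A minimal sketch of that idea, assuming SSH access to a broker and the config path used by the plain Apache tarball (the host name and path are placeholders):
# Read the Zookeeper connection string straight from a broker's configuration
ssh kafka-broker-1 "grep '^zookeeper.connect=' /opt/kafka/config/server.properties"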
I have seen some similar questions as follows:
www.quora.com/What-is-the-actual-role-of-Zookeeper-in-Kafka-What-benefits-will-I-miss-out-on-if-I-don%E2%80%99t-use-Zookeeper-and-Kafka-together
Is Zookeeper a must for Kafka?
But I want to know the latest information about this question.
What is the actual role of ZooKeeper in Kafka 2.1?
Zookeeper is required to run a Kafka Cluster.
It is used by Kafka brokers to perform elections (controller and partition leaders) and to store topic metadata and various other things (ACLs, dynamic broker configs, quotas, producer IDs).
Since Kafka 0.9, clients don't require access to Zookeeper, only brokers rely on it.
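To make that concrete, you can browse the znodes Kafka maintains with the zookeeper-shell tool that ships with Kafka (the host and port below are assumptions; the paths are the standard ones Kafka uses):
bin/zookeeper-shell.sh zookeeper-host:2181
# Which broker currently acts as the controller
get /controller
# Topic metadata: partitions and replica assignments
ls /brokers/topics
# Dynamic configs; /kafka-acl only exists if an authorizer is configured
ls /config
ls /kafka-acl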
My company is about to introduce Kafka. However, I was not able to comprehend why neither the Zookeeper configuration nor the Kafka configuration seems to require specifying the existence of the other.
For example, I find neither the Kafka IP in the Zookeeper config nor the Zookeeper IP in the Kafka config.
Can someone explain?
For the Kafka server you should have a server.properties file. It contains the property zookeeper.connect. Zookeeper itself does not need to know about the brokers in advance; they register themselves when they connect.
official documentation: https://kafka.apache.org/documentation/#brokerconfigs
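A minimal illustration of both directions, with placeholder hosts and broker id: the broker finds Zookeeper through server.properties, while Zookeeper only learns about the broker through the ephemeral node the broker creates when it starts.
# server.properties on each broker
broker.id=0
zookeeper.connect=zk-host-1:2181,zk-host-2:2181,zk-host-3:2181
# After startup, the registration is visible from zookeeper-shell:
#   ls /brokers/ids      -> [0]
#   get /brokers/ids/0   -> host, port and listener details for that broker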