Kafka: Set ACLs for multiple users when zookeeper.set.acl=true? - apache-kafka

My setup is the following:
3 Zookeeper nodes secured in the following way:
SASL enabled (quorum.auth.enableSasl=true)
Requires SASL for learners (quorum.auth.learnerRequireSasl=true)
Require SASL for servers (quorum.auth.serverRequireSasl=true)
Require SASL for clients (requireClientAuthScheme=sasl)
A jaas.conf file with the entries QuorumServer, QuorumLearner (both with the same zookeeper account and password), and Server (with a kafka plus a superuser account, plus passwords)
The idea of the superuser account is that I can use separate identities and secrets (and possibly permissions) for the Kafka cluster vs. connections by admins from CLI tools.
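For reference, here is a minimal sketch of the relevant zoo.cfg settings and jaas.conf entries (account names and secrets are placeholders, not my real values):
zoo.cfg (SASL-related lines only):
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
quorum.auth.enableSasl=true
quorum.auth.learnerRequireSasl=true
quorum.auth.serverRequireSasl=true
jaas.conf:
QuorumServer {
   org.apache.zookeeper.server.auth.DigestLoginModule required
   user_zookeeper="zookeeper-secret";
};
QuorumLearner {
   org.apache.zookeeper.server.auth.DigestLoginModule required
   username="zookeeper"
   password="zookeeper-secret";
};
Server {
   org.apache.zookeeper.server.auth.DigestLoginModule required
   user_kafka="kafka-secret"
   user_superuser="superuser-secret";
};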
Then...
3 Kafka nodes secured in the following way:
All listeners require SASL_PLAINTEXT (listener.security.protocol.map)
SASL mechanism is SCRAM-SHA-512 (sasl.enabled.mechanisms)
Brokers require SASL for inter-broker as well as client connections (sasl.mechanism.inter.broker.protocol)
Super users: kafka, superuser
Set ACLs on all metadata that Kafka creates (zookeeper.set.acl=true). See KIP-38: https://cwiki.apache.org/confluence/display/KAFKA/KIP-38%3A+ZooKeeper+Authentication
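In server.properties terms, the relevant lines boil down to roughly this (listener addresses are placeholders and the broker-side JAAS/SCRAM credential setup is omitted):
listeners=SASL_PLAINTEXT://0.0.0.0:9092
listener.security.protocol.map=SASL_PLAINTEXT:SASL_PLAINTEXT
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
super.users=User:kafka;User:superuser
zookeeper.set.acl=true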
In Kafka + Zookeeper deployments with default settings, Zookeeper essentially applies no noteworthy protection mechanisms. Any rogue actor who can connect to a Zookeeper instance (e.g. after penetrating the so-called isolated network) can change Kafka metadata stored in Zookeeper at will, such as creating new Kafka users and elevating permissions.
With the zookeeper.set.acl=true setting, Kafka will automatically apply ACLs to all the Znodes it creates (for clusters, topics, offsets, etc.) so that its Znodes are protected from unauthenticated and unauthorized access, adding defense in depth.
Important: These ACLs are Znode ACLs (a Zookeeper concept) and not the same as the Kafka ACLs that can be applied to clusters, topics, and the like. The zookeeper-shell.sh example below shows the subnodes of /config and the ACL set on the /config/users Znode. Only the kafka identity has full control, world has no access whatsoever:
ls /config
[changes, clients, brokers, users, topics]
getAcl /config/users
'sasl,'kafka
: cdrwa
Kafka will only set ACLs on Znodes for one account, which is typically named kafka. Certain Kafka administration tasks, such as adding Kafka users with SCRAM-SHA-512 credentials (via the kafka-configs.sh tool), cannot be performed through the Kafka brokers but require the CLI tool to talk to Zookeeper directly.
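For example, creating SCRAM credentials for a new user goes straight to Zookeeper (the host, user name, and password below are placeholders):
KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/jaas.conf" \
bin/kafka-configs.sh --zookeeper zk1:2181 --alter \
--add-config 'SCRAM-SHA-512=[password=alice-secret]' \
--entity-type users --entity-name alice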
And this finally gets me to the problem that I am facing: because the Znode ACLs automatically set by Kafka brokers are only set for the kafka identity, it is not possible to perform Zookeeper CLI operations using any other identity, such as a superuser identity.
Question: Does anybody know how to make Kafka set Znode ACLs for more than just the kafka identity? Specifically, I would also like the superuser identity to be able to make modifications directly in Zookeeper.
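One obvious but unsatisfying workaround would be to widen the ACLs on individual Znodes by hand from zookeeper-shell (connected as the kafka identity); this has to be repeated per Znode and does not cover Znodes that Kafka creates later:
setAcl /config/users sasl:kafka:cdrwa,sasl:superuser:cdrwa
getAcl /config/users
'sasl,'kafka
: cdrwa
'sasl,'superuser
: cdrwa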

Related

How to make the topic creator the ACL admin for that topic

I have a Kafka cluster on which multiple teams create their topics using the Java client (KafkaAdmin).
Now we want to enable ACLs on those topics. I know this is possible as a superuser (using the Kafka CLI or Admin Client).
Is it possible to make the topic creator the admin for that topic, so that each creator is responsible for their own topic's ACLs?
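For context, the way a cluster admin would delegate per-team topic control looks roughly like this (a sketch assuming a 2.x kafka-acls.sh with --bootstrap-server and prefixed resource patterns; the principal and topic prefix are made up):
bin/kafka-acls.sh --bootstrap-server broker1:9092 --command-config admin.properties \
--add --allow-principal User:teamA \
--operation All --topic teamA. --resource-pattern-type prefixed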

Kafka zookeeper authentication not working

I am trying to enable SASL username and password for a Kafka cluster with no ssl. I followed the steps on this Stackoverflow:
Kafka SASL zookeeper authentication
SASL authentication seems to be working for the Kafka brokers. Consumers and producers have to authenticate before writing to or reading from a topic. So far so good.
The problem is with creating and deleting topics on Kafka. When I try to use the following command, for example:
~/kafka/bin/kafka-topics.sh --list --zookeeper 10.x.y.z:2181
I am able to list all topics in the Kafka cluster and create or delete any topic with no authentication at all.
I tried to follow the steps here:
Super User Authentication and Authorization
but nothing seems to work.
Any help in this matter is really appreciated.
You need to add zookeeper.set.acl=true to your Kafka server.properties so that Kafka creates everything in Zookeeper with ACLs set. For topics that already exist there will be no ACL, and anyone can remove them directly from Zookeeper.
Actually, because of that mess, I had to delete everything from my Zookeeper and Kafka and start from scratch.
But once everything is set, you can open a Zookeeper shell to verify that the ACLs are indeed set:
KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/your/jaas.conf" bin/zookeeper-shell.sh XXXXX:2181
From the shell you can run getAcl /brokers/topics and check that world does not have cdrwa.
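The output should look roughly like this (assuming the broker principal is kafka; non-sensitive paths typically also keep world-readable access, but not cdrwa):
getAcl /brokers/topics
'sasl,'kafka
: cdrwa
'world,'anyone
: r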
On a side note, the link you provided doesn't seem to reflect how the current version of Kafka stores information in Zookeeper. I briefly looked at the code, and for those kafka-topics.sh commands the topic information comes from /brokers/topics instead of /config/topics.

How does Zookeeper set its zNode permissions with SASL authentication?

Let's imagine you are in a Kerberized Ambari environment. Zookeeper is set to SASL, with only read permissions for unauthenticated users.
When you start your Kafka broker, it will authenticate against Zookeeper as "kafka" and be able to create the znode. Looking at the Zookeeper ACLs, "kafka" is automatically granted cdrwa (all) permissions on the znode.
My question is: does Zookeeper behave this way because it is in an Ambari environment that does not restrict users in its JAAS config Client section, automatically granting Zookeeper ACL permissions on new znodes?
If you have zookeeper.set.acl set to true, then Kafka will set secured ACLs on any newly created znode whose path matches one of the ZkUtils.SecureZkRootPaths parents (see the source code).
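The identity that ends up in those ACLs is whatever the broker authenticated as through the Client section of its JAAS file. A sketch of what an Ambari-managed setup typically generates follows (the keytab path and principal are examples); together with kerberos.removeHostFromPrincipal=true and kerberos.removeRealmFromPrincipal=true on the Zookeeper side, this is what shortens the identity to plain kafka in the ACL:
Client {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   storeKey=true
   keyTab="/etc/security/keytabs/kafka.service.keytab"
   principal="kafka/broker1.example.com@EXAMPLE.COM";
};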

Not Kerberized Kafka broker connection to Kerberized Zookeeper

I couldn't find any info about this issue, so I'd be glad if someone could help me on this.
I have a Kerberized cluster with services such as Hbase, MapReduce, HDFS, Zookeeper,... all kerberized and working.
Let's imagine I want to add some kafka brokers to the cluster, but I do not want to Kerberize Kafka, since a shot in the testicles makes me feel better than the idea of a kerberized Kafka.
I don't know if I'm missing something, some parameter... probably I am... but can Zookeeper be told that it also has to accept PLAINTEXT requests for some nodes, or for some specific paths, such as the kafka chroot in this example:
zookeeper:2181/kafka
To sum up, the question is:
Is there any option to include a non-Kerberized Kafka broker and make it work against the already Kerberized Zookeeper in the cluster?
If you need a configuration like:
[zookeeper] <----- SASL ----> [kafka] <----- non-authenticated request ---> [clients]
then yes, it's possible. You just need to:
Create a principal (with keytab) for the brokers that will be used to communicate with Zookeeper.
Configure the Zookeeper ACLs, granting cdrwa access on the node zookeeper:2181/kafka to that user.
Copy the keytab to the brokers and configure the Kafka JAAS file (the Zookeeper client login section is called Client by default) like this:
Client {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   storeKey=true
   keyTab="/path/to/keytab"
   principal="user@REALM";
};
Then set zookeeper.set.acl=true in the Kafka configuration, but do not set any authorizer.class.name (that would enable authorization checks for Kafka producers and consumers).
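Putting the broker side of that together, the wiring looks roughly like this (host names, chroot, and file paths are placeholders):
server.properties:
listeners=PLAINTEXT://broker1:9092
zookeeper.connect=zookeeper:2181/kafka
zookeeper.set.acl=true
and start the broker with the JAAS file from the previous step:
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_jaas.conf"
bin/kafka-server-start.sh config/server.properties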

Restrict Topic creation/alteration

I have a 3-node unsecured Kafka (v0.10.2.1) cluster with topic auto-creation disabled and topic deletion enabled, via the following in server.properties:
auto.create.topics.enable=false
delete.topic.enable=true
Topics are then created/altered on the cluster using bin/kafka-topics.sh. However, it looks like anyone can create topics on the cluster once they know the endpoints.
Is there a way to lock down topic creation/alteration to specific hosts to prevent abuses?
Edit 1:
Since ACLs were suggested, I tried to restrict topic creation to select hosts using kafka-acls.sh.
I restarted the brokers after adding the following to server.properties:
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
I tried the following to restrict topic creation to localhost:
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:* --cluster --operation Create --allow-host 127.0.0.1
However, I was still able to create topics from another host using kafka-topics.sh with the right endpoints. Is it the case that ACLs can't be used without authentication?
You need to use access control lists (ACLs) to restrict such operations, and that implies knowing who the caller is, so you need Kafka to be secured by an authentication mechanism in the first place.
ACLs: http://kafka.apache.org/documentation.html#security_authz
Authentication can be done using SSL or SASL or by plugging in a custom provider, see the preceding sections of the same document.
Disabling auto-creation is not an access control mechanism, it only means that trying to produce to or consume from a topic will not create it automatically.
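Once Kafka itself is secured with authentication, the lockdown could look roughly like this (a sketch for the SimpleAclAuthorizer that ships with 0.10.x; User:kafka is assumed to be the broker principal and User:admin is a placeholder for whoever may create topics):
# server.properties: deny by default, keep the brokers themselves exempt
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=false
super.users=User:kafka
# then grant topic creation only to the chosen principal
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
--add --allow-principal User:admin --operation Create --cluster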