Restrict Topic creation/alteration - apache-kafka

I have a 3-node unsecured Kafka (v0.10.2.1) cluster with topic auto-creation disabled and explicit topic deletion enabled, via the following in server.properties:
auto.create.topics.enable=false
delete.topic.enable=true
Topics are then created/altered on the cluster using bin/kafka-topics.sh. However, it looks like anyone can create topics on the cluster once they know the endpoints.
Is there a way to lock down topic creation/alteration to specific hosts to prevent abuses?
Edit 1:
Since ACL was suggested, I tried to restrict topic creation to select hosts using kafka-acls.sh.
I restarted the brokers after adding the following to server.properties:
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
I tried the following to restrict topic creation to localhost.
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:* --cluster --operation Create --allow-host 127.0.0.1
However, I was still able to create topics from another host using kafka-topics.sh with the right endpoints. Is it the case that ACLs can't be used without authentication?

You need to use access control lists (ACLs) to restrict such operations, and that implies knowing who the caller is, so you need Kafka to be secured by an authentication mechanism in the first place.
ACLs: http://kafka.apache.org/documentation.html#security_authz
Authentication can be done using SSL or SASL or by plugging in a custom provider, see the preceding sections of the same document.
Disabling auto-creation is not an access control mechanism; it only means that trying to produce to or consume from a topic will not create it automatically.
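For illustration, once authentication is in place, the same kind of rule the question attempted can be tied to a principal instead of a host. This is only a sketch; User:admin is an assumed principal name, and it reuses the SimpleAclAuthorizer setup from the question:
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:admin --operation Create --cluster
The principal name here has to come from whatever authentication layer (SSL or SASL) you configure; on an unauthenticated PLAINTEXT listener every client is just User:ANONYMOUS, so per-user rules have nothing to match against.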

Related

Kafka service broken after applying ACL

I'm configuring a 3-node Kafka cluster (version 3.2.0) on which I plan to use ACLs for authorization. For the moment I am using SASL for authentication and StandardAuthorizer for authorization (I am using KRaft).
I set the ACL successfully with this command:
/usr/local/kafka/bin/kafka-acls.sh --command-config /usr/local/kafka/config/kraft/adminclient-config.conf --bootstrap-server <broker hostname>:9092 --add --allow-principal User:* --allow-host <ip> --operation Read --operation Write --topic <topic name>
But then whenever I restart a broker, it fails with an error like this:
ERROR [StandardAuthorizer 1] addAcl error (org.apache.kafka.metadata.authorizer.StandardAuthorizerData)
java.lang.RuntimeException: An ACL with ID JjIHfwV4TMi5yo9oPXMxWw already exists.
It seems like it always tries to reapply the ACL, is this normal?
How can I fix this?
Thanks
I tried to exclude authentication issues removing the SSL settings and keeping just the SASL settings.
I would expect that in a cluster setup the addition or removal of an ACL is propagated to all the brokers, and if not, at least that the broker state would not be broken.
We have a cluster with the same configuration and are facing problems configuring ACLs too, but in our case we are not able to make the StandardAuthorizer work. When starting the server, it raises an AuthorizerNotReadyException.
This is our configuration to enable the ACL authorizer in server.properties:
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
allow.everyone.if.no.acl.found=true
super.users=User:admin
Compared to your server.properties configuration file, do you see any difference?
Maybe if we manage to run the same configuration, we can check whether we run into the same problem you are experiencing and look for a solution.

Kafka permission on a topic creating a Group Authorization Exception

So I have a Kafka cluster running with zookeeper and SSL. I gave read permission to a user for a specific topic in the Kafka ACLs: I can see it in zookeeper.
When this user is consuming the data, they are getting a Group Authorization Exception.
Do I need to add every group to the ACL? I am confused about this error.
Thank you
You can update your post with the exception trace.
Keeping that aside, the following is the exception we receive if a client is not authorized to produce or consume events.
EXCEPTION="org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [<<TopicName>>]\n"; EXCEPTION_TYPE="org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: <<Topic>>\n"
If you are receiving such an exception, you need to make sure you have defined your ACL principal correctly.
Principal Definition
Kafka ACLs are defined in the general format of "Principal P is [Allowed/Denied] Operation O From Host H on any Resource R matching ResourcePattern RP".
In order to add, remove or list ACLs you can use the Kafka authorizer CLI. By default, if no ResourcePatterns match a specific Resource R, then R has no associated ACLs, and therefore no one other than super users is allowed to access R. If you want to change that behaviour, you can include the following in server.properties:
allow.everyone.if.no.acl.found=true
Sample Principal
Suppose you want to add an ACL "Principals User:Bob and User:Alice are allowed to perform Operation Read and Write on Topic Test-Topic from IP 198.51.100.0 and IP 198.51.100.1". You can do that by executing the CLI with following options:
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic Test-topic
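Regarding the GroupAuthorizationException in the question itself: consumers also need the Read operation on their consumer group resource, not just on the topic. A sketch along the same lines (User:Bob and the group name my-group are placeholders):
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --operation Read --group my-group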

Kafka: Set ACLs for multiple users when zookeeper.set.acl=true?

My setup is the following:
3 Zookeeper nodes secured in the following way:
SASL enabled (quorum.auth.enableSasl=true)
Require SASL for learners (quorum.auth.learnerRequireSasl=true)
Require SASL for servers (quorum.auth.serverRequireSasl=true)
Require SASL for clients (requireClientAuthScheme=sasl)
A jaas.conf file with the entries QuorumServer, QuorumLearner (both with the same zookeeper account and password), and Server (with a kafka plus a superuser account, plus passwords)
The idea of the superuser account is that I can use separate identities and secrets (and possibly permissions) for the Kafka cluster vs. connections by admins from CLI tools.
Then...
3 Kafka nodes secured in the following way:
All listeners require SASL_PLAINTEXT (listener.security.protocol.map)
SASL mechanism is SCRAM-SHA-512 (sasl.enabled.mechanisms)
Brokers require SASL for interbroker as well as client connections (sasl.mechanism.inter.broker.protocol)
Super users: kafka, superuser
Set ACLs on all metadata that Kafka creates (zookeeper.set.acl=true). See KIP-38: https://cwiki.apache.org/confluence/display/KAFKA/KIP-38%3A+ZooKeeper+Authentication. A server.properties sketch of these settings follows below.
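For reference, a rough server.properties sketch of the broker settings listed above; the listener address, port and exact values are my assumptions, not copied from the real setup:
listeners=SASL_PLAINTEXT://0.0.0.0:9092
listener.security.protocol.map=SASL_PLAINTEXT:SASL_PLAINTEXT
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
# the two super users mentioned above
super.users=User:kafka;User:superuser
zookeeper.set.acl=true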
In Kafka + Zookeeper deployments with default settings, Zookeeper essentially applies no noteworthy protection mechanisms. Any rogue actor who can connect to a Zookeeper instance (e.g. after penetrating the so-called isolated network) can change Kafka metadata stored in Zookeeper at will, such as creating new Kafka users and elevating permissions.
With the zookeeper.set.acl=true setting, Kafka will automatically apply ACLs to all the Znodes it creates (for clusters, topics, offsets, etc.) so that its Znodes are protected from unauthenticated and unauthorized access = more defense in depth.
Important: These ACLs are Znode ACLs (a Zookeeper concept) and not the same as the Kafka ACLs that can be applied to clusters, topics, and the like. The zookeeper-shell.sh example below shows the subnodes of /config and the ACL set on the /config/users Znode. Only the kafka identity has full control, world has no access whatsoever:
ls /config
[changes, clients, brokers, users, topics]
getAcl /config/users
'sasl,'kafka
: cdrwa
Kafka will only set ACLs on Znodes for one account, which is typically named kafka. Certain Kafka administration tasks, such as adding Kafka users with SCRAM-SHA-512 authentication (kafka-configs.sh tool), cannot be done through Kafka brokers but require direct interaction between the CLI tool and Zookeeper.
And this finally gets me to the problem that I am facing: because Znode ACLs automatically set by Kafka brokers are only set for the kafka identity, it is not possible to perform Zookeeper CLI operations using any other identity, such as a superuser identity.
Question: Does anybody know how to make Kafka set Znode ACLs for more than just the kafka identity? Specifically, I would also like the superuser identity to be able to make modifications directly in Zookeeper.

Where does Zookeeper keep Kafka ACL list?

Where does Zookeeper(or Kafka) keep its ACL list?
When you run scripts like kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 --list --topic test, where does Zookeeper (or Kafka) get its list?
I am trying to find a file that stores all the ACLs.
You can access Zookeeper using the zookeeper-shell.sh script.
There is a znode called kafka-acl where information about ACLs for groups, topics, the cluster and so on is stored.
You can, for example, list information about ACLs on topics with ls /kafka-acl/Topic.
Then get information about a specific topic with get /kafka-acl/Topic/test.
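A short zookeeper-shell session as a sketch, assuming ZooKeeper on localhost:2181 and ACLs already defined on a topic named test:
bin/zookeeper-shell.sh localhost:2181
ls /kafka-acl/Topic
get /kafka-acl/Topic/test
The value returned by get is a small JSON document listing the ACL entries for that topic.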
Since I landed here searching for the same information and eventually stumbled my way to the answer, I thought to add some additional information. Since Apache Kafka 2.0, for topics with patternType=PREFIXED, the ACLs are stored under the zookeeper node /kafka-acl-extended; this is in addition to the /kafka-acl node, which holds topic details of patternType=LITERAL.
For more details, read KIP-290.
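From the same zookeeper-shell session you can inspect that node too; a minimal sketch (the subtree is only populated once prefixed ACLs have been created):
ls /kafka-acl-extended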
For additional reference:
If you look at your Zookeeper configuration file (zoo.cfg or zookeeper.properties), you will see the dataDir parameter, which tells you where zookeeper stores its data.
For example,
dataDir=/tmp/confluent.iSAdMTvO/zookeeper/data
So, the Kafka ACL list will be stored there, but in order to view or manage it, use the zookeeper-shell script, because if you open the data files directly you won't be able to recognize the contents.

Kafka zookeeper authentication not working

I am trying to enable SASL username and password authentication for a Kafka cluster with no SSL. I followed the steps in this Stack Overflow post:
Kafka SASL zookeeper authentication
SASL authentication seems to be working for the Kafka brokers: consumers and producers have to authenticate before writing to or reading from a topic. So far so good.
The problem is with creating and deleting topics on Kafka. When I try to use the following command, for example:
~/kafka/bin/kafka-topics.sh --list --zookeeper 10.x.y.z:2181
I am able to list all topics in the kafka cluster and create or delete any topic with no authentication at all.
I tried to follow the steps here:
Super User Authentication and Authorization
but nothing seems to work.
Any help in this matter is really appreciated.
Thanks & Regards,
Firas Khasawneh
You need to add zookeeper.set.acl=true to your Kafka server.properties so that Kafka will create everything in zookeeper with ACL set. For the topics which are already there, there will be no ACL and everyone can remove them directly from zookeeper.
Actually because of that mess, I had to delete everything from my zookeeper and Kafka and start from scratch.
But once everything is set, you can open zookeeper shell to verify that the ACL is indeed set:
KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/your/jaas.conf" bin/zookeeper-shell.sh XXXXX:2181
From the shell you can run getAcl /brokers/topics and check that world does not have cdrwa access.
On a side note, the link you provided doesn't seem to reflect how the current version of Kafka stores information in zookeeper. I briefly looked at the code, and for those kafka-topics.sh commands the topic information comes from /brokers/topics instead of /config/topics.
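For reference, once zookeeper.set.acl=true has taken effect, getAcl on broker-created znodes typically shows full control only for the broker's SASL identity plus read-only access for world. A sketch of what that looks like (the kafka identity is an assumption; it depends on your JAAS principal):
getAcl /brokers/topics
'sasl,'kafka
: cdrwa
'world,'anyone
: r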