Kafka service broken after applying ACL - apache-kafka

I'm configuring a 3-node Kafka cluster (version 3.2.0) on which I plan to use ACLs for authorization. For the moment I am using SASL for authentication and StandardAuthorizer for authorization (I am running in KRaft mode).
I set the ACL successfully with this command:
/usr/local/kafka/bin/kafka-acls.sh --command-config /usr/local/kafka/config/kraft/adminclient-config.conf --bootstrap-server <broker hostname>:9092 --add --allow-principal User:* --allow-host <ip> --operation Read --operation Write --topic <topic name>
But then whenever I restart a broker it fails with a similar error:
ERROR [StandardAuthorizer 1] addAcl error (org.apache.kafka.metadata.authorizer.StandardAuthorizerData)
java.lang.RuntimeException: An ACL with ID JjIHfwV4TMi5yo9oPXMxWw already exists.
It seems like the broker always tries to reapply the ACL. Is this normal?
How can I fix this?
Thanks
I tried to rule out authentication issues by removing the SSL settings and keeping just the SASL settings.
I would expect that in a cluster setup the addition or removal of an ACL is propagated to all the brokers, and if not, at least that the broker state would not be left broken.
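For reference, the ACLs currently stored in the cluster can be listed with the same admin-client config used above:

```shell
# List all ACLs currently stored in the cluster metadata,
# using the same command-config as the --add invocation above
/usr/local/kafka/bin/kafka-acls.sh \
  --command-config /usr/local/kafka/config/kraft/adminclient-config.conf \
  --bootstrap-server <broker hostname>:9092 \
  --list
```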

We have a cluster with the same configuration and are facing problems configuring ACLs too, but in our case we are not able to make the StandardAuthorizer work at all: when the server starts, it raises an AuthorizerNotReadyException.
This is our configuration to enable the ACL authorizer in server.properties:
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
allow.everyone.if.no.acl.found=true
super.users=User:admin
Compared to your server.properties configuration file, do you find any difference?
Maybe if we manage to run the same configuration, we can see whether we run into the same problem you are experiencing and look for a solution.

Related

Kafka permission on a topic creating a Group Authorization Exception

So I have a Kafka cluster running with ZooKeeper and SSL. I gave a read permission to a user for a specific topic in the Kafka ACLs: I can see it in ZooKeeper.
When this user is consuming the data, they are getting a Group Authorization Exception.
Do I need to add every group to the ACL? I am confused about this error.
Thank you
You can update your post with the exception trace.
That aside, the following is the exception we receive if a client is not authorized to produce or consume events.
EXCEPTION="org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [<<TopicName>>]\n"; EXCEPTION_TYPE="org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: <<Topic>>\n"
If you are receiving such an exception, you need to make sure you have defined your ACL principal correctly.
Principal Definition
Kafka ACLs are defined in the general format of "Principal P is [Allowed/Denied] Operation O From Host H on any Resource R matching ResourcePattern RP".
In order to add, remove or list ACLs you can use the Kafka authorizer CLI. By default, if no ResourcePattern matches a specific Resource R, then R has no associated ACLs, and therefore no one other than super users is allowed to access R. If you want to change that behaviour, you can include the following in server.properties.
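That setting, as given in the Kafka authorization documentation, is:

```
allow.everyone.if.no.acl.found=true
```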
Sample Principal
Suppose you want to add an ACL "Principals User:Bob and User:Alice are allowed to perform Operation Read and Write on Topic Test-Topic from IP 198.51.100.0 and IP 198.51.100.1". You can do that by executing the CLI with following options:
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic Test-Topic
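On newer broker versions the same ACL can also be added through the brokers instead of ZooKeeper, using --bootstrap-server; a sketch, assuming a broker reachable at localhost:9092:

```shell
# Same ACL as above, applied via the AdminClient API rather than ZooKeeper
bin/kafka-acls.sh --bootstrap-server localhost:9092 \
  --add --allow-principal User:Bob --allow-principal User:Alice \
  --allow-host 198.51.100.0 --allow-host 198.51.100.1 \
  --operation Read --operation Write --topic Test-Topic
```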

zookeeper is not secure even using SASL authentication

I'm configuring a Kafka cluster with 3 broker nodes and 3 ZooKeeper nodes. I implemented security as described in the Confluent documentation, adding the attribute authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
and
requireClientAuthScheme=sasl
and added KAFKA_OPTS=-Djava.security.auth.login.config=/usr/local/confluent/etc/kafka/zookeeper_jaas.conf to my systemd unit file.
But when I use zookeeper-cli from outside the cluster, I can still see the znodes.
What am I doing wrong?
EDIT:
The config requireClientAuthScheme=sasl exists in my ZooKeeper properties file.
P.S
SSL and SASL weren't enabled before. Could this affect old znodes? Do I need a migration to apply the security to objects created earlier?
You need requireClientAuthScheme=sasl in zookeeper.properties, and -Dzookeeper.requireClientAuthScheme=sasl on the ZooKeeper JVM command line. And you also need to set secureClientPort instead of clientPort in zookeeper.properties.
See the section "Require All Connections to use SASL Authentication" here.
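Putting those pieces together, a minimal sketch of zookeeper.properties (the dataDir and port values here are only examples):

```
# zookeeper.properties -- sketch; dataDir and port are example values
dataDir=/var/lib/zookeeper
secureClientPort=2182
# do NOT also set clientPort, or an unauthenticated port stays open
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
```

plus -Dzookeeper.requireClientAuthScheme=sasl added to the JVM options (e.g. in KAFKA_OPTS alongside the java.security.auth.login.config setting already shown above).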

How to secure kafka Topic with username and password from CLI/command line?

I have installed Docker on my Windows 10 and also installed Kafka. I have created a "test" Topic inside a Kafka cluster. Now I want to secure the Topic with a simple username and password. I am super new to Kafka, any help would really be appreciated.
To run Kafka commands, I am using Windows PowerShell.
I have tried running a few commands on the command line
To create Topics:-
kafka-topics --create --topic test --partitions 1 --replication-factor 1 --if-not-exists --zookeeper zookeeper:2181
To secure Topic I used command:
kafka-acls --topic test --producer --authorizer-properties --zookeeper zookeeper:2181 --add --allow-principal User:alice
Unfortunately, it says "bash: afka-acl: command not found"
Do I need to include anything in the Kafka configuration file, or is it possible to just run commands from PowerShell to secure the Topic?
Is securing with a username and password the same as an ACL, or different?
Kafka supports authentication of connections to brokers from clients (producers and consumers) using
SSL
SASL (Kerberos) and SASL/PLAIN
This needs configuration changes on both the broker and the clients.
What you are asking for sounds like SASL/PLAIN. However, as mentioned above, this cannot be done from the CLI alone and requires configuration changes. If you follow the steps in the documentation link, it is pretty straightforward.
An ACL is authorization, which defines which user has access to which topics. See this link.
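As a rough sketch of the broker-side changes for SASL/PLAIN without TLS (the values below are examples, not drop-in settings; see the linked documentation for the full procedure):

```
# server.properties -- example fragment for SASL/PLAIN
listeners=SASL_PLAINTEXT://:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN

# kafka_server_jaas.conf -- example credentials, passed to the broker JVM
# via -Djava.security.auth.login.config=...
# KafkaServer {
#   org.apache.kafka.common.security.plain.PlainLoginModule required
#   username="admin"
#   password="admin-secret"
#   user_admin="admin-secret"
#   user_alice="alice-secret";
# };
```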

Kafka zookeeper authentication not working

I am trying to enable SASL username and password authentication for a Kafka cluster with no SSL. I followed the steps in this Stack Overflow answer:
Kafka SASL zookeeper authentication
SASL authentication seems to be working for the Kafka brokers: consumers and producers have to authenticate before writing to or reading from a topic. So far so good.
The problem is with creating and deleting topics on Kafka. When I try to use the following command, for example:
~/kafka/bin/kafka-topics.sh --list --zookeeper 10.x.y.z:2181
I am able to list all topics in the Kafka cluster and create or delete any topic with no authentication at all.
I tried to follow the steps here:
Super User Authentication and Authorization
but nothing seems to work.
Any help in this matter is really appreciated.
Thanks & Regards,
Firas Khasawneh
You need to add zookeeper.set.acl=true to your Kafka server.properties so that Kafka creates everything in ZooKeeper with an ACL set. The topics which are already there will have no ACL, so everyone can remove them directly from ZooKeeper.
Because of that mess, I actually had to delete everything from my ZooKeeper and Kafka and start from scratch.
But once everything is set, you can open zookeeper shell to verify that the ACL is indeed set:
KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/your/jaas.conf" bin/zookeeper-shell.sh XXXXX:2181
From the shell you can run getAcl /brokers/topics and check that world:anyone does not have cdrwa.
On a side note, the link you provided doesn't seem to reflect how the current version of Kafka stores information in ZooKeeper. I briefly looked at the code, and for those kafka-topics.sh commands the topic information comes from /brokers/topics instead of /config/topics.
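For illustration, this is roughly what the check looks like in the ZooKeeper shell; the admin principal shown here is an example, and the exact output depends on your JAAS configuration:

```
getAcl /brokers/topics
# secured znode: only the SASL principal has full rights,
# world gets read-only at most, e.g.
#   'sasl,'admin
#   : cdrwa
#   'world,'anyone
#   : r
# an unsecured znode instead shows 'world,'anyone with cdrwa
```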

Restrict Topic creation/alteration

I have a 3-node unsecured Kafka (v0.10.2.1) cluster with topic auto-creation disabled and topic deletion enabled via the following in server.properties:
auto.create.topics.enable=false
delete.topic.enable=true
Topics are then created/altered on the cluster using bin/kafka-topics.sh. However, it looks like anyone can create topics on the cluster once they know the endpoints.
Is there a way to lock down topic creation/alteration to specific hosts to prevent abuse?
Edit 1:
Since ACL was suggested, I tried to restrict topic creation to select hosts using kafka-acls.sh.
I restarted the brokers after adding the following to server.properties:
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
I tried the following to restrict topic creation to localhost.
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:* --cluster --operation Create --allow-host 127.0.0.1
However, I was still able to create topics from another host using kafka-topics.sh with the right endpoints. Is it the case that ACLs can't be used without authentication?
You need to use access control lists (ACLs) to restrict such operations, and that implies knowing who the caller is, so you need Kafka to be secured by an authentication mechanism in the first place.
ACLs: http://kafka.apache.org/documentation.html#security_authz
Authentication can be done using SSL or SASL or by plugging in a custom provider, see the preceding sections of the same document.
Disabling auto-creation is not an access control mechanism; it only means that trying to produce to or consume from a topic will not create it automatically.
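Once authentication is in place, a sketch of what locking topic creation down to a single principal could look like (User:admin is an example name, and allow.everyone.if.no.acl.found would need to be false, since with it set to true anything without an ACL stays open):

```shell
# Allow only the authenticated principal User:admin to create topics;
# combine with allow.everyone.if.no.acl.found=false in server.properties
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:admin --cluster --operation Create
```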