I'm configuring a Kafka cluster with 3 broker nodes and 3 ZooKeeper nodes. I implemented security as described in the Confluent documentation, adding the attribute authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
and
requireClientAuthScheme=sasl
and added KAFKA_OPTS=-Djava.security.auth.login.config=/usr/local/confluent/etc/kafka/zookeeper_jaas.conf to my systemd unit file.
But when I use zookeeper-cli from outside the cluster, I can still see the znodes.
What am I doing wrong?
EDIT:
the setting requireClientAuthScheme=sasl already exists in my ZooKeeper properties file
P.S
SSL and SASL were not enabled before. Could this affect old znodes? Do I need a migration to apply the security to objects created earlier?
You need requireClientAuthScheme=sasl in zookeeper.properties and -Dzookeeper.requireClientAuthScheme=sasl on the ZooKeeper JVM command line. You also need to set secureClientPort, not clientPort, in zookeeper.properties.
See the section "Require All Connections to use SASL Authentication" here.
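Putting the answer together, a zookeeper.properties sketch that enforces SASL might look like this (the port number here is an example; adapt it to your install):

```properties
# zookeeper.properties — require SASL from every client
secureClientPort=2182
# do NOT also set clientPort, or unauthenticated clients can still connect on it
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
```

And on the ZooKeeper JVM (for example via the systemd unit), alongside the JAAS setting already mentioned above, add -Dzookeeper.requireClientAuthScheme=sasl.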
Related
I'm configuring a 3-node Kafka cluster (version 3.2.0) on which I plan to use ACLs for authorization. For the moment I am using SASL for authentication and StandardAuthorizer for authorization (I am using KRaft).
I set the ACL successfully with this command:
/usr/local/kafka/bin/kafka-acls.sh --command-config /usr/local/kafka/config/kraft/adminclient-config.conf --bootstrap-server <broker hostname>:9092 --add --allow-principal User:* --allow-host <ip> --operation Read --operation Write --topic <topic name>
But then whenever I restart a broker it fails with a similar error:
ERROR [StandardAuthorizer 1] addAcl error (org.apache.kafka.metadata.authorizer.StandardAuthorizerData)
java.lang.RuntimeException: An ACL with ID JjIHfwV4TMi5yo9oPXMxWw already exists.
It seems like it always tries to reapply the ACL; is this normal?
How can I fix this?
Thanks
To rule out authentication issues, I tried removing the SSL settings and keeping just the SASL settings.
I would expect that in a cluster setup the addition or removal of an ACL is propagated to all the brokers, and if not, at least that the broker state would not be broken.
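For debugging, it can help to see which ACLs the cluster already has before adding new ones; the same tool can list them (the hostname and config path are the placeholders from the command above):

```shell
# list every ACL currently stored in the cluster metadata
/usr/local/kafka/bin/kafka-acls.sh \
  --command-config /usr/local/kafka/config/kraft/adminclient-config.conf \
  --bootstrap-server <broker hostname>:9092 \
  --list
```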
We have a cluster with the same configuration and are facing problems configuring ACLs too, but in our case we are not able to make the StandardAuthorizer work: when starting the server, it raises an AuthorizerNotReadyException.
This is our configuration to enable the ACL authorizer in server.properties:
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
allow.everyone.if.no.acl.found=true
super.users=User:admin
Compared to your server.properties configuration file, do you find any difference?
Maybe if we succeed in running the same configuration, we can see whether we run into the same problem you are experiencing and look for a solution.
I am trying to enable SASL username/password authentication for a Kafka cluster with no SSL. I followed the steps in this Stack Overflow answer:
Kafka SASL zookeeper authentication
SASL authentication seems to be working for the Kafka brokers: consumers and producers have to authenticate before writing to or reading from a topic. So far so good.
The problem is with creating and deleting topics on Kafka. When I try to use the following command, for example:
~/kafka/bin/kafka-topics.sh --list --zookeeper 10.x.y.z:2181
I am able to list all topics in the cluster and create or delete any topic with no authentication at all.
I tried to follow the steps here:
Super User Authentication and Authorization
but nothing seems to work.
Any help in this matter is really appreciated.
Thanks & Regards,
Firas Khasawneh
You need to add zookeeper.set.acl=true to your Kafka server.properties so that Kafka creates everything in ZooKeeper with ACLs set. For topics that already exist, there will be no ACLs, and anyone can remove them directly from ZooKeeper.
Because of that mess, I actually had to delete everything from ZooKeeper and Kafka and start from scratch.
But once everything is set up, you can open a ZooKeeper shell to verify that the ACLs are indeed set:
KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/your/jaas.conf" bin/zookeeper-shell.sh XXXXX:2181
From the shell you can run getAcl /brokers/topics and check that world:anyone does not have cdrwa.
On a side note, the link you provided doesn't seem to reflect how the current version of Kafka stores information in ZooKeeper. I briefly looked at the code, and for those kafka-topics.sh commands the topic information comes from /brokers/topics rather than /config/topics.
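For illustration, a secured path should look roughly like the following in the ZooKeeper shell; the exact principal depends on your JAAS configuration, and this output is a sketch rather than a capture from a real cluster:

```text
getAcl /brokers/topics
'sasl,'kafka
: cdrwa
'world,'anyone
: r
```

If world:anyone still shows cdrwa on a path, that node was created before the ACLs were enabled.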
I am getting the error message below when trying to connect and view the topic/consumer details of one of our Kafka clusters.
We have 3 brokers in the cluster, which I am able to see, but not the topics and their partitions.
Note: I have Kafka 1.0 and the Kafka Tool version is 2.0.1.
I had the same issue on my MacBook Pro. The tool was using "tshepo-mbp" as the hostname, which it could not resolve. To get it to work I added 127.0.0.1 tshepo-mbp to the /etc/hosts file.
Kafka Tool is most likely using the hostname to connect to the broker and cannot reach it. You may be connecting to the ZooKeeper host by IP address, but make sure you can connect to/ping the broker's hostname from the machine running Kafka Tool.
If you cannot ping the broker, either fix the network issue or, as a workaround, edit the hosts file on your client so it knows how to reach the broker by its name.
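As a concrete sketch of that workaround, the hosts-file entry maps the broker's advertised hostname to an address the client can reach (the hostname and IP here are examples):

```text
# /etc/hosts on the machine running Kafka Tool
192.168.1.10   kafka-broker-1
```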
This issue occurs if you have not set the listeners and advertised.listeners properties in the server.properties file.
For Ex:
config/server.properties
...
listeners=PLAINTEXT://:9092
...
advertised.listeners=PLAINTEXT://<public-ip/host-name>:9092
...
To fix this issue, we need to edit the server.properties file:
$ vim /usr/local/etc/kafka/server.properties
Here update the listeners value from
listeners=PLAINTEXT://:9092
to
listeners=PLAINTEXT://localhost:9092
Source: https://medium.com/@Ankitthakur/apache-kafka-installation-on-mac-using-homebrew-a367cdefd273
Posting for better visibility (I already commented the same earlier in this thread):
In my case, when I used Kafka Tool from my local machine, the tool tried to reach the Kafka broker port, which our cluster admins had blocked for my local machine; that is why I was not able to connect.
Resolution:
Either ask the admins to open the port on the intranet if they can; if they cannot, you can use tunnelling to that port for your testing purposes.
Hope this helps a few people.
I am using Kafka version 0.10.1. I connected the Kafka brokers and their clients via SSL and it is working fine. Now I have a query with some constraints.
My constraints are:
No plaintext communication allowed.
The connection between the Kafka brokers and their clients must be SSL.
The connection between the Kafka brokers and ZooKeeper must be SASL (since ZooKeeper doesn't support SSL).
Since all inter-broker communication is set to SSL, my question is whether a SASL connection between ZooKeeper and the Kafka brokers is possible without enabling plaintext on the brokers.
Thanks in advance.
Yes, it is possible to set up a Kafka cluster with ZooKeeper that meets all the requirements you listed.
You'll need two listeners, SSL and SASL_SSL (no PLAINTEXT), in your Kafka config:
listeners=SASL_SSL://host.name:port,SSL://host.name:port
Set the inter-broker protocol to SSL:
security.inter.broker.protocol=SSL
I suggest you check the Security section in the Kafka documentation to see exactly what you need to do to get this working, including how to configure clients so they connect over SASL_SSL: http://kafka.apache.org/documentation/#security
It also contains a section about securing Zookeeper:
http://kafka.apache.org/documentation/#zk_authz
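Combining the pieces above, a broker configuration sketch with no PLAINTEXT listener could look like this (the hostnames, ports, and SASL mechanism are assumptions to adapt to your setup):

```properties
# server.properties — SSL for inter-broker traffic, SASL_SSL for clients that need SASL
listeners=SSL://host.name:9093,SASL_SSL://host.name:9094
advertised.listeners=SSL://host.name:9093,SASL_SSL://host.name:9094
security.inter.broker.protocol=SSL
# example mechanism; use whichever your clients are configured for
sasl.enabled.mechanisms=PLAIN
```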
Let's imagine you are in a Kerberized Ambari environment. ZooKeeper is set to SASL, with only read permissions for unauthenticated users.
When you start your Kafka broker, it authenticates against ZooKeeper as "kafka" and is able to create the znode. Looking at the ZooKeeper ACLs, "kafka" is automatically granted cdrwa (all) permissions on the znode.
My question is: does ZooKeeper behave this way because it is in an Ambari environment whose JAAS Client section does not restrict users, automatically granting ZooKeeper ACL permissions on new znodes?
Sorry for the formatting, I'm on my phone at 1:53 am.
If you have zookeeper.set.acl set to true, then Kafka will set secured ACLs on any newly created znode that matches one of the ZkUtils.SecureZkRootPaths parents (see the source code).
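In practice that means a broker configured as below creates its znodes with secured ACLs, while znodes created before the switch keep their old open ACLs (the JAAS file path is an example):

```properties
# server.properties
zookeeper.set.acl=true
```

plus a JAAS configuration passed to the broker JVM, e.g. KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf", so the broker authenticates to ZooKeeper as the principal that owns those ACLs.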