Let's imagine you are in a Kerberized Ambari environment, where Zookeeper is set to SASL and unauthenticated users have only read permissions.
When you start your Kafka broker, it will authenticate against Zookeeper as «kafka» and be able to create the znode. Looking at the Zookeeper ACLs, «kafka» is automatically granted cdrwa (all) permissions on the znode.
My question is: does Zookeeper behave this way because it is in an Ambari environment that does not restrict users in the Client section of its JAAS config, automatically granting Zookeeper ACL permissions on new znodes?
Sorry for the formatting, I'm on my phone at 1:53 am... zzz.
If you have zookeeper.set.acl set to true, then Kafka will set secured ACLs on any newly created znode whose path matches one of the ZkUtils.SecureZkRootPaths parents (see the source code).
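For context, a minimal sketch of what that looks like on the broker side (the file path in the comment is just an example):
# server.properties on each broker
zookeeper.set.acl=true
# The broker also needs to authenticate to Zookeeper, e.g. via a JAAS Client section passed with
# KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
# Only paths matching ZkUtils.SecureZkRootPaths get the restrictive (sasl:kafka:cdrwa) ACL.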
Related
I have a Kafka cluster running with Zookeeper, Confluent Schema Registry and Kafka Security Manager (KSM). KSM, https://github.com/conduktor/kafka-security-manager, is software that makes it easy to manage Kafka ACLs with a CSV file instead of the command line tool.
The Confluent Schema Registry lets us store Avro schemas for Kafka. It is currently open and I need to secure it. I want to give every user the READ or GET permission only. I am currently using Kubernetes to deploy all the tools.
How can I do that with KSM? Where can I find examples?
Thank you
Kafka ACLs don't apply to the Schema Registry; they would apply to the underlying _schemas topic, which you'd set up in the Registry's configuration.
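As a sketch, the topic-level lockdown could be done with kafka-acls.sh along these lines (the principal name, bootstrap address, and admin config file are placeholders for whatever your Registry actually authenticates as):
kafka-acls.sh --bootstrap-server broker:9092 --command-config admin.properties \
  --add --allow-principal User:schemaregistry \
  --operation Read --operation Write --operation DescribeConfigs \
  --topic _schemas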
The API itself can be secured using TLS and HTTP Authentication
https://docs.confluent.io/platform/current/schema-registry/security/index.html
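For the HTTP side, a rough sketch of the corresponding schema-registry.properties (property names per the Confluent docs linked above; paths, passwords, realm, and role names are placeholders):
listeners=https://0.0.0.0:8081
ssl.keystore.location=/etc/schema-registry/keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
authentication.method=BASIC
authentication.realm=SchemaRegistryRealm
authentication.roles=admin,developer
# The realm must match an entry in the JAAS file passed to the Registry JVM via
# -Djava.security.auth.login.config=... (e.g. a Jetty PropertyFileLoginModule listing users and roles)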
Regarding "give every user the READ or GET permission only":
I don't think you can lock down HTTP-method-level access for specific users; you'll likely need a proxy for this. But also, without POST there would be no way to register new schemas...
I'm configuring a Kafka cluster with 3 broker nodes and 3 Zookeeper nodes. I implemented security as described in the Confluent documentation, adding the attributes
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
and
requireClientAuthScheme=sasl
and adding KAFKA_OPTS=-Djava.security.auth.login.config=/usr/local/confluent/etc/kafka/zookeeper_jaas.conf to my systemd file.
But when I use the Zookeeper CLI from outside the cluster, I can still see the znodes.
What am I doing wrong?
EDIT:
The setting requireClientAuthScheme=sasl already exists in my zookeeper.properties file.
P.S.
SSL and SASL were not enabled before. Could this affect the old znodes? Do I need a migration to apply the security to objects that were created earlier?
You need requireClientAuthScheme=sasl in zookeeper.properties, and -Dzookeeper.requireClientAuthScheme=sasl on the Zookeeper JVM command line. You also need to set secureClientPort, not clientPort, in zookeeper.properties.
See the section "Require All Connections to use SASL Authentication" here.
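For reference, a minimal sketch of the pieces the answer above describes (the port number and file locations are assumptions based on the question):
# zookeeper.properties
secureClientPort=2182
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
# Note: secureClientPort also implies configuring TLS on the Zookeeper side.
# JVM flags for the Zookeeper process (e.g. via KAFKA_OPTS in the systemd unit)
-Dzookeeper.requireClientAuthScheme=sasl
-Djava.security.auth.login.config=/usr/local/confluent/etc/kafka/zookeeper_jaas.conf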
My setup is the following:
3 Zookeeper nodes secured in the following way:
SASL enabled (quorum.auth.enableSasl=true)
Requires SASL for learners (quorum.auth.learnerRequireSasl=true)
Require SASL for servers (quorum.auth.serverRequireSasl=true)
Require SASL for clients (requireClientAuthScheme=sasl)
A jaas.conf file with the entries QuorumServer, QuorumLearner (both with the same zookeeper account and password), and Server (with a kafka plus a superuser account, plus passwords)
The idea of the superuser account is that I can use separate identities and secrets (and possibly permissions) for the Kafka cluster vs. connections by admins from CLI tools.
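A sketch of what that jaas.conf could look like, assuming DIGEST-MD5 (DigestLoginModule) rather than Kerberos; the account names follow the description above and all passwords are placeholders:
Server {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    user_kafka="kafka-secret"
    user_superuser="super-secret";
};
QuorumServer {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    user_zookeeper="zk-secret";
};
QuorumLearner {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="zookeeper"
    password="zk-secret";
};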
Then...
3 Kafka nodes secured in the following way:
All listeners require SASL_PLAINTEXT (listener.security.protocol.map)
SASL mechanism is SCRAM-SHA-512 (sasl.enabled.mechanisms)
Brokers require SASL for interbroker as well as client connections (sasl.mechanism.inter.broker.protocol)
Super users: kafka, superuser
Set ACLs on all metadata that Kafka creates (zookeeper.set.acl=true). See KIP-38 (https://cwiki.apache.org/confluence/display/KAFKA/KIP-38%3A+ZooKeeper+Authentication).
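In server.properties terms, that broker setup corresponds roughly to the following (the listener name, port, and exact protocol map are assumptions):
listeners=SASL_PLAINTEXT://0.0.0.0:9092
listener.security.protocol.map=SASL_PLAINTEXT:SASL_PLAINTEXT
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
super.users=User:kafka;User:superuser
zookeeper.set.acl=true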
In Kafka + Zookeeper deployments with default settings, Zookeeper essentially applies no noteworthy protection mechanisms. Any rogue actor who can connect to a Zookeeper instance (e.g. after penetrating the so-called isolated network) can change Kafka metadata stored in Zookeeper at will, such as creating new Kafka users and elevating permissions.
With the zookeeper.set.acl=true setting, Kafka will automatically apply ACLs to all the znodes it creates (for clusters, topics, offsets, etc.) so that its znodes are protected from unauthenticated and unauthorized access, adding more defense in depth.
Important: These ACLs are znode ACLs (a Zookeeper concept) and not the same as the Kafka ACLs that can be applied to clusters, topics, and the like. The zookeeper-shell.sh example below shows the subnodes of /config and the ACL set on the /config/users znode. Only the kafka identity has full control; world has no access whatsoever:
ls /config
[changes, clients, brokers, users, topics]
getAcl /config/users
'sasl,'kafka
: cdrwa
Kafka will only set ACLs on Znodes for one account, which is typically named kafka. Certain Kafka administration tasks, such as adding Kafka users with SCRAM-SHA-512 authentication (kafka-configs.sh tool), cannot be done through Kafka brokers but require direct interaction between the CLI tool and Zookeeper.
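For example, creating a SCRAM user is exactly such a Zookeeper-direct operation (host, JAAS path, and credentials below are placeholders):
KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/admin_jaas.conf" \
  bin/kafka-configs.sh --zookeeper zk1:2181 --alter \
  --add-config 'SCRAM-SHA-512=[password=alice-secret]' \
  --entity-type users --entity-name alice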
And this finally gets me to the problem that I am facing: because the znode ACLs automatically set by Kafka brokers are only set for the kafka identity, it is not possible to perform Zookeeper CLI operations using any other identity, such as the superuser identity.
Question: Does anybody know how to make Kafka set Znode ACLs for more than just the kafka identity? Specifically, I would also like the superuser identity to be able to make modifications directly in Zookeeper.
I am trying to enable SASL username and password authentication for a Kafka cluster with no SSL. I followed the steps in this Stack Overflow answer:
Kafka SASL zookeeper authentication
SASL authentication seems to be working for the Kafka brokers: consumers and producers have to authenticate before writing to or reading from a topic. So far so good.
The problem is with creating and deleting topics on Kafka. When I try to use the following command, for example:
~/kafka/bin/kafka-topics.sh --list --zookeeper 10.x.y.z:2181
I am able to list all topics in the Kafka cluster and create or delete any topic with no authentication at all.
I tried to follow the steps here:
Super User Authentication and Authorization
but nothing seems to work.
Any help in this matter is really appreciated.
Thanks & Regards,
Firas Khasawneh
You need to add zookeeper.set.acl=true to your Kafka server.properties so that Kafka creates everything in Zookeeper with ACLs set. For the topics that are already there, there will be no ACL and everyone can remove them directly from Zookeeper.
Actually because of that mess, I had to delete everything from my zookeeper and Kafka and start from scratch.
But once everything is set, you can open zookeeper shell to verify that the ACL is indeed set:
KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/your/jaas.conf" bin/zookeeper-shell.sh XXXXX:2181
From the shell you can run getAcl /brokers/topics and check that world does not have cdrwa.
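On a broker-created path, the output should then look roughly like this (a sketch; non-sensitive paths typically also carry a world-readable entry):
getAcl /brokers/topics
'world,'anyone
: r
'sasl,'kafka
: cdrwa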
On a side note, the link you provided doesn't seem to reflect how the current version of Kafka stores information in Zookeeper. I briefly looked at the code, and for those kafka-topics.sh commands the topic information comes from /brokers/topics instead of /config/topics.
We have deployed a Zookeeper cluster and already have some data. Now we need to limit access to Zookeeper to ensure security.
As far as I know, we can set ACLs on existing znodes. But can we limit access to the Zookeeper service by some means other than ACLs? If so, we wouldn't need to change the ACL of every znode.
It seems that Zookeeper authentication always goes together with ACLs.