We have deployed a ZooKeeper cluster and already have some data in it. Now we need to limit access to ZooKeeper to keep it safe.
As far as I know, we can set ACLs on existing znodes. But can we restrict access to the ZooKeeper service by some means other than ACLs? If so, we wouldn't need to change the ACL of every znode.
It seems that ZooKeeper authentication always goes together with ACLs.
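For what it's worth, a minimal sketch of what enforcing SASL authentication looks like on the ZooKeeper side (the znode path and account name are made up). Note that requiring an auth scheme only forces clients to authenticate; without ACLs on the znodes themselves, an authenticated client can still read and write everything, which is why authentication and ACLs usually go together:

# zoo.cfg – require clients to authenticate via SASL
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl

# zookeeper-shell – an ACL still has to be applied per existing znode
setAcl /some/existing/znode sasl:myuser:cdrwa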
We are exploring implementing multi-tenancy in Kafka for each of our dev teams, hosted on AWS EKS.
For this, the initial thought process is to have topic-level multi-tenancy:
NLB-Nginx-Ingress: an ingress host-route per team, with all the brokers that hold that team's topic-partition leaders added as the backend.
Access restriction through ACLs at the broker level, based on a principal such as a user.
Sample flow:
Ingress book-keeping challenges:
When someone from the foobar team creates a new topic and it lands on a new broker, we need to add that broker to the backend of the respective ingress.
If a broker goes down, the ingress again needs to be updated.
Prune brokers when their partition leadership goes away due to topic deletion.
What I'm Looking for:
Apart from writing an operator or app to do the above tasks, is there a better way to achieve this? I'm open to completely new suggestions as well, since this is only at the POC stage.
PS: I'm new to Kafka; if this exchange is not suitable for this question, please suggest the right exchange to post on. Thanks!
First of all, ACL restrictions are at the cluster level, not the broker level.
Secondly, for the bootstrapping process the client only needs to reach at least one active broker in the cluster; that broker sends back metadata describing where the partition leaders are, and for the ongoing connection the client then connects to the appropriate brokers directly.
There is no need to put a load balancer in front of the Kafka bootstrap step. The suggestion is to list at least two brokers (or more) as a comma-separated list; the client will connect to the first one available and fetch the metadata, and for further connections the client needs to be able to reach all brokers in the cluster.
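To illustrate, a minimal client configuration along those lines (host names are placeholders):

# client.properties – any one of these brokers is enough to bootstrap,
# but the client must still be able to reach every broker returned in the metadata
bootstrap.servers=broker-1.example.com:9092,broker-2.example.com:9092,broker-3.example.com:9092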
You can use ACLs to restrict which principals (users) can access which topics in the cluster, based on their needs.
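As a sketch, access for one team could be limited to its own topic prefix roughly like this (the principal, prefix, and admin.properties file are made up for the example):

kafka-acls.sh --bootstrap-server broker-1:9092 --command-config admin.properties \
  --add --allow-principal User:foobar \
  --operation Read --operation Write \
  --topic foobar. --resource-pattern-type prefixed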
My setup is the following:
3 Zookeeper nodes secured in the following way:
SASL enabled (quorum.auth.enableSasl=true)
Requires SASL for learners (quorum.auth.learnerRequireSasl=true)
Require SASL for servers (quorum.auth.serverRequireSasl=true)
Require SASL for clients (requireClientAuthScheme=sasl)
A jaas.conf file with the entries QuorumServer, QuorumLearner (both with the same zookeeper account and password), and Server (with a kafka plus a superuser account, plus passwords)
The idea of the superuser account is that I can use separate identities and secrets (and possibly permissions) for the Kafka cluster vs. connections by admins from CLI tools.
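For context, the jaas.conf on each Zookeeper node looks roughly like the sketch below, assuming DIGEST-MD5 (with Kerberos the login modules would differ); all account names and passwords are placeholders:

QuorumServer {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  user_zookeeper="zookeeper-secret";
};
QuorumLearner {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="zookeeper"
  password="zookeeper-secret";
};
Server {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  user_kafka="kafka-secret"
  user_superuser="superuser-secret";
};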
Then...
3 Kafka nodes secured in the following way:
All listeners require SASL_PLAINTEXT (listener.security.protocol.map)
SASL mechanism is SCRAM-SHA-512 (sasl.enabled.mechanisms)
Brokers require SASL for interbroker as well as client connections (sasl.mechanism.inter.broker.protocol)
Super users: kafka, superuser
Set ACLs on all metadata that Kafka creates (zookeeper.set.acl=true). See [KIP-38](https://cwiki.apache.org/confluence/display/KAFKA/KIP-38%3A+ZooKeeper+Authentication)
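Put together, a sketch of the relevant part of each broker's server.properties; the listener address is a placeholder and the exact values are assumptions based on the settings above:

listeners=SASL_PLAINTEXT://0.0.0.0:9092
listener.security.protocol.map=SASL_PLAINTEXT:SASL_PLAINTEXT
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
super.users=User:kafka;User:superuser
zookeeper.set.acl=true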
In Kafka + Zookeeper deployments with default settings, Zookeeper essentially applies no noteworthy protection mechanisms. Any rogue actor who can connect to a Zookeeper instance (e.g. after penetrating the so-called isolated network) can change Kafka metadata stored in Zookeeper at will, such as creating new Kafka users and elevating permissions.
With the zookeeper.set.acl=true setting, Kafka will automatically apply ACLs to all the Znodes it creates (for clusters, topics, offsets, etc.) so that its Znodes are protected from unauthenticated and unauthorized access = more defense in depth.
Important: These ACLs are Znode ACLs (a Zookeeper concept) and not the same as the Kafka ACLs that can be applied to clusters, topics, and the like. The zookeeper-shell.sh example below shows the subnodes of /config and the ACL set on the /config/users Znode. Only the kafka identity has full control, world has no access whatsoever:
ls /config
[changes, clients, brokers, users, topics]
getAcl /config/users
'sasl,'kafka
: cdrwa
Kafka will only set ACLs on Znodes for one account, which is typically named kafka. Certain Kafka administration tasks, such as adding Kafka users with SCRAM-SHA-512 authentication (kafka-configs.sh tool), cannot be done through Kafka brokers but require direct interaction between the CLI tool and Zookeeper.
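For example, creating SCRAM credentials for a new user goes straight to Zookeeper; the user name, password, and zk-1 host below are placeholders:

kafka-configs.sh --zookeeper zk-1:2181 --alter \
  --add-config 'SCRAM-SHA-512=[password=alice-secret]' \
  --entity-type users --entity-name alice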
And this finally gets me to the problem that I am facing: because the Znode ACLs automatically set by the Kafka brokers are only set for the kafka identity, it is not possible to perform Zookeeper CLI operations using any other identity, such as the superuser identity.
Question: Does anybody know how to make Kafka set Znode ACLs for more than just the kafka identity? Specifically, I would also like the superuser identity to be able to make modifications directly in Zookeeper.
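(I am aware that the ACLs could be widened by hand from zookeeper-shell.sh while connected as the kafka identity, along the lines of the sketch below, but that would have to be repeated for every protected Znode, which is why I want Kafka to set it automatically.)

setAcl /config/users sasl:kafka:cdrwa,sasl:superuser:cdrwa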
Setup: We have 3 Schema Registry instances behind an AWS ELB. How should we change the schema_registry.properties file to set up Schema Registry in cluster mode?
We are calling Schema Registry via the ELB endpoint.
The cluster of Schema Registry instances is established by each instance contacting the same ZooKeeper cluster, so you'll basically want each instance to have the same configuration. A single master will be elected using the strategy in the docs, and any follower that receives a write request will simply forward it to the leader. If for some reason you only want certain instances to be master-eligible, you can set master.eligibility=false in the properties file of the others. If you want to get fancy and set non-default advertised listeners for your instances, those have to be unique per instance (they are host:port combinations, so this should be expected).
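A sketch of what each instance's properties file might contain, assuming the ZooKeeper-based election described above (hosts and port are placeholders; newer Schema Registry versions use kafkastore.bootstrap.servers and Kafka-based election instead):

listeners=http://0.0.0.0:8081
kafkastore.connection.url=zk-1:2181,zk-2:2181,zk-3:2181
kafkastore.topic=_schemas
master.eligibility=true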
Let's imagine you are in a Kerberized Ambari environment. ZooKeeper is set up with SASL, and unauthenticated users get read permissions only.
When you start your Kafka broker, it will authenticate against ZooKeeper as "kafka" and be able to create the znode. Looking at the ZooKeeper ACLs, "kafka" is automatically granted cdrwa (all) permissions on the znode.
My question is: does ZooKeeper behave this way because it is in an Ambari environment that does not restrict users in the Client section of its jaas config, automatically granting ZooKeeper ACL permissions on new znodes?
Sorry for the format, I'm on my mobile at 1:53 am... zzz.
If you have zookeeper.set.acl set to true, then Kafka will set secure ACLs on any newly created znode whose path falls under one of the ZkUtils.SecureZkRootPaths parents (see the source code).
I am new to SolrCloud (4.x). Can anybody explain in detail the roles and responsibilities of ZooKeeper in SolrCloud?
Also, how does ZooKeeper work with regard to search/add requests to Solr?
ZooKeeper is the central repository for SolrCloud configuration. You can think of it as a distributed filesystem that can be accessed by all Solr nodes in the cluster. So if you change any config file, you only need to upload it to ZooKeeper, not to every node in the cluster.
One more important responsibility of ZooKeeper is to keep an eye on the state of all Solr nodes in the cluster. If a node goes down and a search request comes in for it, ZooKeeper ensures the request is routed to an alternative replica node.
When you update any document in SolrCloud, it is ZooKeeper that delegates your update request to the appropriate node in the cloud holding the document.
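For example, uploading a changed config set to ZooKeeper (instead of copying it to every node) looks roughly like this with the zkcli.sh script shipped with Solr; the hosts, path, and config name are made up:

./zkcli.sh -zkhost zk-1:2181,zk-2:2181,zk-3:2181 \
  -cmd upconfig -confdir /path/to/myconf -confname myconf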
For in-depth details, you should read this:
https://cwiki.apache.org/confluence/display/ZOOKEEPER/ProjectDescription