TopicAuthorizationException with Kafka SASL/Plain authentication - apache-kafka

I have created the JAAS file below and exported it using KAFKA_OPTS.
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="1nJ8ZVpGaJ"
user_admin="1nJ8ZVpGaJ"
user_kafka="kafka123$"
user_kafdrop="kafdrop123$";
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin-secret";
};
Client {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="zookeeperUser"
password="zookeeperPassword";
};
In my Java code I am using the user kafdrop with the SASL/PLAIN mechanism. If I use this user directly, I get a TopicAuthorizationException, but if I add the same user as a super user it works.
In the case of SASL/SCRAM (SHA-512, SHA-256), we use the kafka-acls utility to grant access to topics.
How do I manage topic-level access for a particular user with SASL/PLAIN? Do we need to make every user a super user?
Thanks in advance.

No, you don't need to run as root. But if you run Kafka in Docker, you need to chown your files. You can find this note on GitHub:
NOTE: As this is a non-root container, the mounted files and directories must have the proper permissions for the UID 1001.
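For completeness: ACLs are independent of the SASL mechanism, so the same kafka-acls tool used with SCRAM also works for PLAIN principals — there is no need to make every user a super user. A sketch, assuming ZooKeeper at localhost:2181; the topic name orders and the group name kafdrop-group are placeholders:

```shell
# Grant the PLAIN principal "kafdrop" read access to a topic.
kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:kafdrop \
  --operation Read --operation Describe \
  --topic orders

# Consumers also need Read access on their consumer group.
kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:kafdrop \
  --operation Read --group kafdrop-group
```

The principal name is whatever `username` the client authenticates with, regardless of whether the login module is PlainLoginModule or ScramLoginModule.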


Password rotation for kafka acl passwords which are stored in zookeeper

How do we handle password rotation for Kafka ACL passwords?
Users can't access the Kafka cluster without authentication; we add the user (and password) to ZooKeeper and add the respective ACLs for that user.
Now I have a requirement to rotate the passwords of these users, which are stored in ZooKeeper.
I don't think you can rotate passwords in that case without a chance of downtime (auth failures).
Ignoring the small chance of auth failures, what you could do is the following:
Change the passwords in ZooKeeper using the same command you used to create the username/password.
Then, change your applications to use the new passwords.
The downside of this approach is that if your app restarts between steps 1 and 2 (i.e. ZooKeeper has been updated with the new password, but the app is still using the old one), the app will get auth failure errors.
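The two steps above can be sketched with the standard tooling; the user name, password, and ZooKeeper address below are placeholders:

```shell
# Step 1: update the SCRAM credential stored in ZooKeeper
# (the same command that created it in the first place).
kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type users --entity-name appuser \
  --add-config 'SCRAM-SHA-256=[password=new-secret],SCRAM-SHA-512=[password=new-secret]'

# Step 2: roll the new password out to the client applications,
# e.g. in each client's properties file:
# sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
#   username="appuser" password="new-secret";
```

Any client that restarts after step 1 but before step 2 will fail to authenticate, which is the downtime window described above.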

Encrypted Keystore instead of Location Path

I have an enterprise-level Kafka cluster hosted in AWS. I'm trying to consume a topic from this cluster and need to use the SSL protocol to connect to the servers.
From the documentation I found that I need to enable a few properties:
ssl.keystore.password=test1234
ssl.key.password=test1234
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=test1234
I have a problem here: I cannot store keystore.jks and truststore.jks in the source. Our security policy does not allow storing sensitive data in source control.
Instead, I have an encrypted keystore file, which I pull from Vault.
Is there any way I can use this encrypted keystore? I don't see such an option in the documentation.
A local file path is needed for SSL certificates. You'll need a wrapper script that runs before your code starts (or before the main consumer logic), downloads the necessary material from Vault, and fills in the properties files/configs.
Or use a library that handles this, such as Spring Vault.
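A minimal sketch of such a wrapper script, assuming the keystores are stored base64-encoded in Vault's KV engine; the Vault path `secret/kafka/ssl`, the field names, and `consumer.jar` are all hypothetical:

```shell
#!/usr/bin/env sh
set -eu

# Pull the base64-encoded stores from Vault and write them to local files
# before the JVM starts, so the Kafka client sees ordinary file paths.
vault kv get -field=keystore_b64 secret/kafka/ssl | base64 -d > /tmp/kafka.client.keystore.jks
vault kv get -field=truststore_b64 secret/kafka/ssl | base64 -d > /tmp/kafka.client.truststore.jks

# Fill in the locations the client config expects, then start the app.
cat >> /tmp/client.properties <<EOF
ssl.keystore.location=/tmp/kafka.client.keystore.jks
ssl.truststore.location=/tmp/kafka.client.truststore.jks
EOF

exec java -jar consumer.jar /tmp/client.properties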

Does a security architecture diagram of JAAS / SASL / PLAIN / SCRAM exist for Kafka or other frameworks?

I am not an expert, but I am aware of the auth mechanisms that commonly work with Kafka.
Common use cases/implementations for Kafka use SASL PLAIN or SASL SCRAM, i.e.
security.protocol=SASL_SSL or security.protocol=SASL_PLAINTEXT (not recommended for the PLAIN mechanism)
and
sasl.mechanism=SCRAM-SHA-256 (or 512) or sasl.mechanism=PLAIN (no longer recommended).
Then I see a JAAS configuration like this:
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username password
What I don't get is how JAAS fits into the client and server architecture. Is there an architecture diagram I can refer to to get the bigger picture? I have searched for Kafka's security architecture and how JAAS fits into it, but had no luck.
Could someone help?
You set the JAAS file as a Java argument in KAFKA_OPTS or in the client properties:
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/client_jaas.conf"
using a KafkaClient {} section in that file.
Or using the client configuration
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required
username="user"
password="psw";
https://docs.confluent.io/platform/current/kafka/authentication_sasl/index.html
Or if you are using java spring framework check this documentation
https://docs.spring.io/spring-security/site/docs/4.2.x/reference/html/jaas.html
JAAS is the file/configuration that contains the credentials of the application user that authenticates to the Kafka cluster.
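Putting the pieces together, a minimal client configuration using the inline sasl.jaas.config approach might look like the following sketch; the broker address, user, and passwords are placeholders:

```properties
bootstrap.servers=broker1:9093
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="user" \
  password="psw";
ssl.truststore.location=/var/private/ssl/kafka.client.truststore.jks
ssl.truststore.password=changeit
```

With the inline property there is no separate JAAS file and no KAFKA_OPTS export; the client library builds the JAAS login context from this one property.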

Nifi + Hortonworks Schema Registry + Kerberos: 401 Authentication required

I am using Apache NiFi 1.7 and trying to use a RecordWriter that uses the Hortonworks Schema Registry controller service to read schema metadata.
However, this controller service doesn't have any Kerberos configuration properties, like the "Kerberos Credentials Service" that other NiFi processors have, so I get a 401 Error: Authentication required when I try to read a schema from the Hortonworks Schema Registry.
The intriguing thing is that this workflow was working before: after stopping the NiFi flow, moving the cluster to a different LAN, and relaunching the flow, it started to fail. I have ruled out network issues, since Kerberos and the Schema Registry keep the same URIs as before, and I can query the registry service from the command line with curl as before.
Is there a way to make the Hortonworks Schema Registry controller service work with Kerberos?
In 1.7.0, the only way to do this is through a JAAS file with an entry for RegistryClient, like:
RegistryClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="REPLACE_WITH_KEYTAB"
storeKey=true
useTicketCache=false
principal="REPLACE_WITH_PRINCIPAL";
};
Then, in NiFi's bootstrap.conf, you need to specify the system property:
java.arg.16=-Djava.security.auth.login.config=/path/to/jaas.conf
In 1.10.0 there are new properties in the service to make it easier to configure.

LDAP ACL Plugin for Zookeeper

I have written a custom LDAP plugin to provide basic ZooKeeper authentication,
something like:
setAcl /zookeeperPath ldap:<Group>:crwda
and when I check the znodes,
addAuth ldap:<uid>:password
grants me access to the znodes.
I know this can be done using Kerberos, but in my enterprise, Linux auth is done through sssd and Kerberos is not enabled.
I am afraid I have done some customization that should not have been done, because I could not find any reference on the internet for doing this.
If there are any plugins that are already used for this, please help.
There is no LDAP auth plugin for ZooKeeper, as ZooKeeper supports SASL Kerberos authentication. But additional ACLs can be set using Active Directory or LDAP group permissions. This can be achieved by implementing
org.apache.zookeeper.server.auth.AuthenticationProvider
and setting -D params such as
-Dzookeeper.authProvider.1=class.path.to.XyzAuthenticationProvider
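A skeleton of such a provider, as a sketch rather than a complete implementation: the scheme name "ldap" mirrors the setAcl/addAuth usage above, while the two LDAP helper methods at the bottom are hypothetical placeholders you would back with JNDI or an sssd lookup.

```java
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.data.Id;
import org.apache.zookeeper.server.ServerCnxn;
import org.apache.zookeeper.server.auth.AuthenticationProvider;

public class LdapAuthenticationProvider implements AuthenticationProvider {

    @Override
    public String getScheme() {
        return "ldap"; // matches: setAcl /zookeeperPath ldap:<Group>:crwda
    }

    @Override
    public KeeperException.Code handleAuthentication(ServerCnxn cnxn, byte[] authData) {
        // authData carries what the client passed to addAuth, e.g. "uid:password".
        String[] parts = new String(authData).split(":", 2);
        if (parts.length == 2 && bindToLdap(parts[0], parts[1])) {
            cnxn.addAuthInfo(new Id(getScheme(), parts[0]));
            return KeeperException.Code.OK;
        }
        return KeeperException.Code.AUTHFAILED;
    }

    @Override
    public boolean matches(String id, String aclExpr) {
        // Called during ACL checks: decide whether the authenticated uid
        // belongs to the group named in the ACL expression.
        return isMemberOfGroup(id, aclExpr);
    }

    @Override
    public boolean isAuthenticated() {
        return true; // ids from this scheme come from a real authentication
    }

    @Override
    public boolean isValid(String id) {
        return id != null && !id.isEmpty();
    }

    // Hypothetical LDAP helpers -- real implementations would bind to the
    // directory (JNDI) or consult sssd; stubbed out here.
    private boolean bindToLdap(String uid, String password) { return false; }
    private boolean isMemberOfGroup(String uid, String group) { return false; }
}
```

The class is then placed on the server classpath and registered with the -Dzookeeper.authProvider.1 system property shown above.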