I am using Apache NiFi 1.7 and I'm trying to use a RecordWriter that uses the Hortonworks Schema Registry controller service to read schema metadata.
However, this controller service doesn't have any Kerberos configuration properties like the "Kerberos Credentials Service" that other NiFi processors have, so I get a 401 Error: Authentication required when I try to read a schema from the Hortonworks Schema Registry.
The intriguing thing here is that this workflow was working before; after stopping the NiFi flow, moving the cluster to a different LAN, and relaunching the flow, it started to fail. I have ruled out network issues, since Kerberos and the Schema Registry keep the same URIs as before and I can still query the registry service from the command line with curl, as before.
Is there a way to make the Hortonworks Schema Registry controller service work with Kerberos?
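For reference, the kind of check I ran from the command line looks roughly like this (keytab, principal, host, port and API path are placeholders for my environment):

kinit -kt /path/to/client.keytab client@EXAMPLE.COM
curl --negotiate -u : "http://schema-registry-host:7788/api/v1/schemaregistry/schemas"

Both still succeed after the move, so DNS and Kerberos themselves look healthy.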
In 1.7.0 the only way to do this is through a JAAS file with an entry for RegistryClient, like:
RegistryClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="REPLACE_WITH_KEYTAB"
storeKey=true
useTicketCache=false
principal="REPLACE_WITH_PRINCIPAL";
};
Then in NiFi's bootstrap.conf you need to specify the system property:
java.arg.16=-Djava.security.auth.login.config=/path/to/jaas.conf
In 1.10.0 there are new properties in the service that make it easier to configure.
Related
Has anybody connected Kafka ACLs to a Payara/GlassFish server that uses declared security annotations like @DeclaredRoles and @RolesAllowed?
I'm interested in the translation/connection from the ACL roles to the roles defined in the security realm used by the Payara server.
/Jan
I want @RolesAllowed to work with the credentials of the calling user.
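To make the setup concrete, the kind of bean I have in mind looks roughly like this (class, method and role names are just placeholders):

import javax.annotation.security.DeclaredRoles;
import javax.annotation.security.RolesAllowed;
import javax.ejb.Stateless;

// The roles declared here are expected to map onto groups/roles in the Payara security realm
@DeclaredRoles({"admin", "user"})
@Stateless
public class OrderService {

    // Only callers authenticated in the realm with the "admin" role may invoke this method
    @RolesAllowed("admin")
    public void deleteOrder(long orderId) {
        // ... business logic ...
    }
}

The open question is how the principal and roles on the Kafka ACL side end up as the caller identity that these annotations are checked against.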
I have created the JAAS file below and exported it using KAFKA_OPTS.
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="1nJ8ZVpGaJ"
user_admin="1nJ8ZVpGaJ"
user_kafka="kafka123$"
user_kafdrop="kafdrop123$";
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin-secret";
};
Client {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="zookeeperUser"
password="zookeeperPassword";
};
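For context, the broker-side settings that go with this file look roughly like the following; treat the listener, authorizer class and super.users values as placeholders from memory rather than an exact copy of my server.properties:

listeners=SASL_PLAINTEXT://:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=PLAIN
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=false
super.users=User:admin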
In my Java code I am using the user kafdrop with the SASL/PLAIN mechanism. If I use this user directly I get a TopicAuthorizationException, but if I add the same user as a super user then it works.
In the case of SASL/SCRAM (SHA-512, SHA-256), we use the kafka-acls utility to grant access to the topics.
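For example, a command along these lines (broker address, topic name and the admin client config are placeholders):

kafka-acls.sh --bootstrap-server broker:9093 \
  --command-config /etc/kafka/admin-client.properties \
  --add --allow-principal User:kafdrop \
  --operation Read --operation Describe \
  --topic my-topic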
How do I manage topic-level access for a particular user with SASL/PLAIN? Do we need to make every user a super user?
Thanks in advance.
No, you don't need to run as root. But if you run Kafka in Docker, you need to chown your files. You can find this note in the image's documentation on GitHub:
NOTE: As this is a non-root container, the mounted files and
directories must have the proper permissions for the UID 1001.
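A minimal sketch of what that looks like before starting the container (the host directory, mount path and image name are placeholders; adjust them to whatever image you use):

# The container runs as UID 1001, so the mounted files must be owned by that UID
sudo chown -R 1001:1001 ./kafka-conf
docker run -d -v "$PWD/kafka-conf:/opt/kafka/config" your-kafka-image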
I have an enterprise-level Kafka cluster hosted in AWS. I'm trying to consume a topic from the AWS cluster, and I need to use the SSL protocol to connect to the servers.
From the documentation I found that I need to set a few properties:
ssl.keystore.password=test1234
ssl.key.password=test1234
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=test1234
I have a problem here: I cannot store keystore.jks and truststore.jks in the source. Our security policy does not allow storing sensitive data in source control.
Instead I have an encrypted keystore file, which I pull from Vault.
Is there a way I can use this encrypted keystore? I don't see such an option in the documentation.
A local file path is needed for the SSL certificates. You'll need a wrapper script that runs before your code starts (or before the main consumer logic), downloads the necessary material from Vault, and fills in the properties files/configs.
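A rough sketch of such a wrapper, assuming the stores are kept base64-encoded under a KV path; the Vault path, field names, broker and topic are placeholders:

#!/usr/bin/env bash
set -euo pipefail

# Pull the (base64-encoded) stores from Vault and materialise them on local disk
vault kv get -field=keystore secret/kafka/client | base64 --decode > /tmp/kafka.keystore.jks
vault kv get -field=truststore secret/kafka/client | base64 --decode > /tmp/kafka.truststore.jks

# Generate the client config pointing at the freshly written files
cat > /tmp/client-ssl.properties <<EOF
security.protocol=SSL
ssl.keystore.location=/tmp/kafka.keystore.jks
ssl.keystore.password=$(vault kv get -field=keystore_password secret/kafka/client)
ssl.truststore.location=/tmp/kafka.truststore.jks
ssl.truststore.password=$(vault kv get -field=truststore_password secret/kafka/client)
EOF

# Hand over to the actual consumer (placeholder command)
exec kafka-console-consumer.sh --bootstrap-server broker:9093 \
  --topic my-topic --consumer.config /tmp/client-ssl.properties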
Or use a library that handles this, such as Spring Vault.
I am not an expert, but I am aware of the auth mechanisms that commonly work with Kafka.
Common use cases/implementations for Kafka use SASL PLAIN or SASL SCRAM, i.e.
security.protocol=SASL_SSL or security.protocol=SASL_PLAINTEXT (not recommended for the PLAIN mechanism)
and
sasl.mechanism=SCRAM-SHA-256 or 512, or sasl.mechanism=PLAIN (not recommended any more).
Then I see the JAAS configuration as below:
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="..." password="...";
What I don't get in the picture is how JAAS fits into the client and server architecture. Is there an architecture diagram I can refer to, to get the bigger picture? I have searched Google for Kafka's security architecture and how JAAS fits into it, but had no luck.
Could someone help?
You set the JAAS file as a Java argument via KAFKA_OPTS, or in the client properties:
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/client_jaas.conf"
using a KafkaClient { } entry inside that file.
Or using the client configuration:
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="user" \
    password="psw";
https://docs.confluent.io/platform/current/kafka/authentication_sasl/index.html
Or if you are using the Java Spring framework, check this documentation:
https://docs.spring.io/spring-security/site/docs/4.2.x/reference/html/jaas.html
JAAS is the file/configuration that contains the application user's credentials used to authenticate to the Kafka cluster.
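As a concrete illustration of the second option, a Java client can carry its whole JAAS entry inline in its properties instead of pointing at a jaas.conf file (broker address, group, user and password are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SaslPlainConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9093");
        props.put("group.id", "demo-group");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        // Security settings: the JAAS entry is inlined instead of coming from -Djava.security.auth.login.config
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"user\" password=\"psw\";");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribe / poll as usual ...
        }
    }
}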
I installed Flume after installing the main services and enabling Kerberos. Now when I run Generate Missing Credentials it says "No roles required Kerberos credentials to be generated.", which is wrong since Flume needs a flume principal to be created.
Is there a way to hint Cloudera Manager to generate the credentials for Flume?