Does a security architecture diagram of JAAS / SASL / PLAIN / SCRAM when used with Kafka or other frameworks exist? - apache-kafka

I am not an expert, but I am aware of the auth mechanisms that commonly work with Kafka.
Common use cases/implementations for Kafka use SASL PLAIN or SASL SCRAM, i.e.
security.protocol=SASL_SSL or security.protocol=SASL_PLAINTEXT (not recommended for the PLAIN mechanism)
and
sasl.mechanism=SCRAM-SHA-256 (or 512) or sasl.mechanism=PLAIN (not recommended any more).
Then I see a JAAS configuration like this:
sasl.jaas.config = org.apache.kafka.common.security.scram.ScramLoginModule required username password
What I don't get in the picture is how JAAS fits into the client and server architecture. Is there an architecture diagram that I can refer to to get the bigger picture? I have searched Google for the security architecture of Kafka and how JAAS fits into it, but had no luck.
Could someone help?

You set the JAAS file as a Java argument in KAFKA_OPTS or in the client properties:
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/client_jaas.conf"
where the file contains a KafkaClient { ... }; entry.
Or use the client configuration directly:
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="user" \
  password="psw";
https://docs.confluent.io/platform/current/kafka/authentication_sasl/index.html
Or, if you are using the Java Spring framework, check this documentation:
https://docs.spring.io/spring-security/site/docs/4.2.x/reference/html/jaas.html
JAAS is the file/configuration that contains the credentials of the application user that authenticates to the Kafka cluster.
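For reference, a minimal client JAAS file (the one pointed to by java.security.auth.login.config via KAFKA_OPTS) might look like the sketch below; the username and password values are placeholders:

```
KafkaClient {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="user"
  password="psw";
};
```

The client looks up the KafkaClient entry by name, instantiates the listed LoginModule, and uses the resulting credentials during the SASL handshake with the broker. The inline sasl.jaas.config property is just this same entry body embedded in the client configuration.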

Related

Geode Authentication implementation using TLS/SSL certificate

I am trying to implement TLS-based authentication, basically SSL-certificate-based authentication, when two-way SSL is enabled in a Geode cluster. Authentication is performed based on the certificate DN. Let's say a client/peer node is configured with two-way SSL and the certificate "cn=example.com,ou=org,location=us"; authentication and authorization should only succeed if "example.com" is a valid cert, and authorize accordingly. I see that Geode's authentication implementation is based on security-username/password and the SecurityManager, and Geode's implementation does not provide a good way to access the connection peer's certificates. I was not able to find related documentation; any direction on this kind of requirement would be helpful.
Thanks.
As you may already be aware from the comments on your question above, I encourage you to first re-review the section on SSL in the Security chapter of Apache Geode's documentation. SSL is a prerequisite for everything I am about to suggest below.
Apache Geode's Security implementation of Authentication (as well as Authorization) is based on Apache Geode's SecurityManager interface as well as the AuthInitialize interface.
The SecurityManager is used on the server-side to authenticate clients (or additional peers joining the P2P cluster as a member). An implementation of the AuthInitialize interface is used by clients (or other peers joining the P2P cluster as a member) to supply the credentials.
The supplied SecurityManager implementation is configured with the [gemfire.]security-manager property. The AuthInitialize implementation is configured with the [gemfire.]security-client-auth-init property (or the [gemfire.]security-peer-auth-init property).
While Apache Geode's documentation commonly refers to username/password-based authentication for Geode nodes (clients and peers), the fact of the matter is, the Properties returned by the (client-side) AuthInitialize.getCredentials(..) (Javadoc) method and processed on the server-side, SecurityManager.authenticate(:Properties) (Javadoc) could contain the appropriate certificate and evidence as described (for example) here.
It is not uncommon for Password-based authentication to be used together with Certificate-based authentication (both over SSL).
In which case, you could do the following. On the client-side:
package example.app.geode.security.client.auth;

import java.security.PrivateKey;
import java.security.cert.Certificate;
import java.util.Properties;

import org.apache.geode.distributed.DistributedMember;
import org.apache.geode.security.AuthInitialize;

class CertificateBasedAuthInitialize implements AuthInitialize {

  public static CertificateBasedAuthInitialize create() {
    return new CertificateBasedAuthInitialize();
  }

  public Properties getCredentials(Properties securityProperties, DistributedMember member, boolean isServer) {

    Properties credentials = new Properties(securityProperties);

    // Load the PrivateKey and the client Certificate from a KeyStore using the java.security API.
    PrivateKey privateKey = ...
    Certificate clientCertificate = ...

    // Sign some randomly generated data with the PrivateKey.
    Object signedEvidence = ...

    credentials.put("certificate", clientCertificate);
    credentials.put("evidence", signedEvidence);

    // optional...
    credentials.put(AuthInitialize.SECURITY_USERNAME, username);
    credentials.put(AuthInitialize.SECURITY_PASSWORD, password);

    return credentials;
  }
}
Then configure your client with:
# Spring Boot application.properties
spring.data.gemfire.security.client.authentication-initializer=\
example.app.geode.security.client.auth.CertificateBasedAuthInitialize.create
...
On the server-side, the custom SecurityManager implementation would then use the credentials to authenticate the client:
package example.app.geode.security.server.auth;

import java.security.cert.Certificate;
import java.util.Properties;

import org.apache.geode.security.SecurityManager;

class CertificateBasedSecurityManager implements SecurityManager {

  public Object authenticate(Properties securityProperties) {

    Certificate certificate = (Certificate) securityProperties.get("certificate");
    Object signedEvidence = securityProperties.get("evidence");

    // Verify the client's certificate and use its PublicKey to verify the "evidence",
    // then return the authenticated principal (or throw AuthenticationFailedException).
    ...
  }
}
If the servers in the Apache Geode cluster were configured and bootstrapped with Spring, then you would configure your custom SecurityManager implementation using:
# Spring Boot application.properties
spring.data.gemfire.security.manager.class-name=\
example.app.geode.security.server.auth.CertificateBasedSecurityManager
If you used Gfsh to start the Locators and Servers in your cluster, then refer to Apache Geode's documentation to configure properties on startup.
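For Gfsh-started members, the equivalent (non-Spring) configuration would typically go in a gemfire.properties file passed at member startup. This is a sketch; the class names are the hypothetical examples used in this answer:

```
# gemfire.properties (sketch; class names are the example ones from this answer)
security-client-auth-init=example.app.geode.security.client.auth.CertificateBasedAuthInitialize.create
security-manager=example.app.geode.security.server.auth.CertificateBasedSecurityManager
```

The security-client-auth-init property names a static factory method on the client side, while security-manager names the server-side class itself.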
As you may also be aware (based on your tags), Apache Geode integrates with Apache Shiro, which introduces the concept of Realms for auth; the available Realms provided by Apache Shiro are listed here (you can see support for ActiveDirectory, JDBC, JNDI, LDAP, and text-based Realms). Unfortunately, I did not find any support in Apache Shiro for Certificate-based Authentication.
Of course, you could also devise an implementation of Apache Geode's SecurityManager interface along with the AuthInitialize interface, integrated with Spring Security and follow the general advice in Baeldung's blog post.
Hopefully this gives you enough to think about and some ideas on how to go about implementing Certificate-based Authentication between clients and servers (and peers?) in your [Spring] Apache Geode application/cluster.

Encrypted Keystore instead of Location Path

I have an enterprise-level Kafka hosted in an AWS cluster. I'm trying to consume a topic from the AWS cluster. I need to use the SSL protocol for connecting to the servers.
From the documentation I found that I need to enable a few properties:
ssl.keystore.password=test1234
ssl.key.password=test1234
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=test1234
I have a problem here: I cannot store keystore.jks and truststore.jks in the source. Our security policy does not allow storing sensitive data in source control.
Instead, I have an encrypted keystore file, which I pull from Vault.
Is there a possibility that I can use this encrypted keystore? I don't see such a capability in the documentation.
A local file path is needed for the SSL certificates. You'll need a wrapper script that runs before your code starts (or before the main consumer logic) that downloads the necessary information from Vault and fills in the properties files/configs.
Or use a library that handles this, such as Spring Vault.
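One way to bridge the gap, as a sketch: decode the keystore bytes fetched from Vault (simulated below with a base64 string; the actual Vault fetch is out of scope) into a temporary file at startup, so the Kafka client can be given a real ssl.keystore.location. All names here are illustrative:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;
import java.util.Properties;

public class KeystoreMaterializer {

    // Writes keystore bytes (e.g. fetched from Vault as base64) to a temp file
    // and returns client properties pointing at that path.
    static Properties materialize(String base64Keystore, String password) throws Exception {
        byte[] keystoreBytes = Base64.getDecoder().decode(base64Keystore);

        Path tmp = Files.createTempFile("kafka-keystore", ".jks");
        tmp.toFile().deleteOnExit(); // avoid leaving key material behind
        Files.write(tmp, keystoreBytes);

        Properties props = new Properties();
        props.setProperty("security.protocol", "SSL");
        props.setProperty("ssl.keystore.location", tmp.toString());
        props.setProperty("ssl.keystore.password", password);
        return props;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the blob pulled from Vault; a real keystore would go here.
        String fromVault = Base64.getEncoder().encodeToString("fake-keystore-bytes".getBytes());
        Properties props = materialize(fromVault, "test1234");
        System.out.println("keystore exists: "
            + Files.exists(Path.of(props.getProperty("ssl.keystore.location"))));
    }
}
```

The returned Properties would then be merged into the consumer configuration before the KafkaConsumer is constructed.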

Nifi + Hortonworks Schema Registry + Kerberos: 401 Authentication required

I am using Apache NiFi 1.7 and I'm trying to use a RecordWriter that uses the Hortonworks Schema Registry controller service to read schema metadata.
However, this controller service doesn't have any Kerberos configuration properties like the "Kerberos Credentials Service" that other NiFi processors have, so I am getting a 401 Error: Authentication required when I try to read the schema from Hortonworks Schema Registry.
The intriguing thing here is that this workflow was working before; after stopping the NiFi flow, moving the cluster to a different LAN, and relaunching the flow, it started to fail. I ruled out network issues, since Kerberos and the Schema Registry keep the same URIs as before, and I can query the registry service from the command line with curl as before.
Is there a way to make the Hortonworks Schema Registry controller service work with Kerberos?
In 1.7.0 the only way to do it is through a JAAS file with an entry for RegistryClient like:
RegistryClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="REPLACE_WITH_KEYTAB"
storeKey=true
useTicketCache=false
principal="REPLACE_WITH_PRINCIPAL";
};
Then in NiFi's bootstrap.conf you need to specify the system property:
java.arg.16=-Djava.security.auth.login.config=/path/to/jaas.conf
In 1.10.0 there are new properties in the service to make it easier to configure.

Cloudera does not generate missing kerberos credentials for Flume

I installed Flume after installing the main services and enabling Kerberos. And now, when I run Generate Missing Credentials, it says "No roles required Kerberos credentials to be generated.", which is wrong, since Flume needs a flume principal to be created.
Is there a way to hint Cloudera to generate the credentials for Flume?

LDAP ACL Plugin for Zookeeper

I have written a custom LDAP plugin to provide basic ZooKeeper authentication,
something like
setAcl /zookeeperPath ldap:<Group>:crwda
and when I check the znodes,
addAuth ldap:<uid>:password
will grant me access to the znodes.
I know this can be done using Kerberos, but in my enterprise, Linux auth is done through sssd; Kerberos is not enabled.
I am afraid I have done some customization that should not have been done, because I did not find any reference on the internet for doing it.
If there are any plugins that are already used for this, please help.
There is no LDAP auth plugin for ZooKeeper, as ZooKeeper supports SASL Kerberos authentication. But additional ACLs can be set using Active Directory or LDAP group permissions. This can be achieved by implementing
org.apache.zookeeper.server.auth.AuthenticationProvider
and setting -D params as
-Dzookeeper.authProvider.1=class.path.to.XyzAuthenticationProvider
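As a sketch of the shape such a provider takes: the scheme name is what appears in ACLs (ldap:<Group>:crwda), handleAuthentication is invoked on addauth, and matches decides whether an authenticated id satisfies an ACL entry. The real class must implement org.apache.zookeeper.server.auth.AuthenticationProvider from the ZooKeeper server jar; the version below is standalone with simplified signatures so the logic can be shown, the LDAP lookup is stubbed, and all names are illustrative:

```java
import java.util.Map;
import java.util.Set;

// Standalone sketch of a custom "ldap" provider. A real implementation would
// implement org.apache.zookeeper.server.auth.AuthenticationProvider, return
// KeeperException.Code from handleAuthentication, and record the id on the
// connection; here the signatures are simplified to run without the ZooKeeper jar.
public class LdapAuthSketch {

    // Stub directory: user -> groups. A real provider would query LDAP/sssd here.
    private final Map<String, Set<String>> groupsByUser;

    public LdapAuthSketch(Map<String, Set<String>> groupsByUser) {
        this.groupsByUser = groupsByUser;
    }

    // Scheme used in ACLs, e.g. setAcl /path ldap:<Group>:crwda
    public String getScheme() {
        return "ldap";
    }

    // Called on: addauth ldap:<uid>:password
    // Returns the authenticated id, or null on failure.
    public String handleAuthentication(byte[] authData) {
        String[] parts = new String(authData).split(":", 2);
        String uid = parts[0];
        // Stub: accept any known user; a real provider would bind/verify against LDAP.
        return groupsByUser.containsKey(uid) ? uid : null;
    }

    // Does the authenticated id satisfy the ACL expression (a group name)?
    public boolean matches(String id, String aclExpr) {
        Set<String> groups = groupsByUser.get(id);
        return groups != null && groups.contains(aclExpr);
    }
}
```

With this shape, group membership fetched from the directory drives matches(), which is what lets an ldap:<Group>:crwda ACL grant access to every member of the group.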