I am trying to implement TLS-based authentication, essentially SSL certificate-based authentication, when two-way SSL is enabled in a Geode cluster. Authentication is performed based on the certificate DN. Let's say a client/peer node is configured with two-way SSL using the certificate "cn=example.com,ou=org,location=us"; authentication should only succeed if "example.com" is a valid certificate, and authorization should be applied accordingly. I see that Geode's authentication implementation is based on security-username/password and a SecurityManager, and the Geode implementation does not provide a good way to access the connection's peer certificates. I was not able to find related documentation; any direction on this kind of requirement would be helpful.
Thanks.
As you may already be aware, based on the comments in your question above, I encourage you to first re-review the section on SSL in the Security chapter of Apache Geode's documentation. SSL is a prerequisite for everything I am about to suggest below.
Apache Geode's Security implementation of Authentication (as well as Authorization) is based on Apache Geode's SecurityManager interface as well as the AuthInitialize interface.
The SecurityManager is used on the server-side to authenticate clients (or additional peers joining the P2P cluster as a member). An implementation of the AuthInitialize interface is used by clients (or other peers joining the P2P cluster as a member) to supply the credentials.
The supplied SecurityManager implementation is configured with the [gemfire.]security-manager property. The AuthInitialize implementation is configured with the [gemfire.]security-client-auth-init property (or the [gemfire.]security-peer-auth-init property).
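For example, in a plain (non-Spring) Apache Geode configuration, these properties might be declared as follows. This is just a sketch; the class names are the hypothetical examples used throughout this answer:
# gemfire.properties (the gemfire. prefix is only needed when set as a Java system property)
security-manager=example.app.geode.security.server.auth.CertificateBasedSecurityManager
security-client-auth-init=example.app.geode.security.client.auth.CertificateBasedAuthInitialize.create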
While Apache Geode's documentation commonly refers to username/password-based authentication for Geode nodes (clients and peers), the fact of the matter is that the Properties returned by the (client-side) AuthInitialize.getCredentials(..) (Javadoc) method, and processed on the server-side by SecurityManager.authenticate(:Properties) (Javadoc), could contain the appropriate certificate and evidence as described (for example) here.
It is not uncommon for Password-based authentication to be used with Certificate-based authentication (both over SSL).
In which case, you could do the following. On the client-side:
package example.app.geode.security.client.auth;

import java.security.PrivateKey;
import java.security.cert.Certificate;
import java.util.Properties;
import org.apache.geode.distributed.DistributedMember;
import org.apache.geode.security.AuthInitialize;

class CertificateBasedAuthInitialize implements AuthInitialize {

  public static CertificateBasedAuthInitialize create() {
    return new CertificateBasedAuthInitialize();
  }

  public Properties getCredentials(Properties securityProperties, DistributedMember member, boolean isServer) {

    Properties credentials = new Properties(securityProperties);

    // Load the PrivateKey and matching client Certificate from a KeyStore
    // using the java.security API (e.g. KeyStore.getKey(..) and KeyStore.getCertificate(..)).
    PrivateKey privateKey = ...
    Certificate clientCertificate = ...

    // Sign some randomly generated data with the PrivateKey (the "evidence").
    Object signedEvidence = ...

    credentials.put("certificate", clientCertificate);
    credentials.put("evidence", signedEvidence);

    // Optionally, include username/password-based credentials as well.
    credentials.put(AuthInitialize.SECURITY_USERNAME, username);
    credentials.put(AuthInitialize.SECURITY_PASSWORD, password);

    return credentials;
  }
}
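As a side note, the elided signing step might look something like the following minimal sketch. The SHA256withRSA algorithm and the EvidenceSigner helper are assumptions for illustration, not part of Geode's API:

import java.security.PrivateKey;
import java.security.Signature;

class EvidenceSigner {

  // Sign the randomly generated data with the client's PrivateKey,
  // producing the "evidence" the server will verify.
  byte[] sign(PrivateKey privateKey, byte[] data) throws Exception {
    Signature signature = Signature.getInstance("SHA256withRSA");
    signature.initSign(privateKey);
    signature.update(data);
    return signature.sign();
  }
}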
Then configure your client with:
# Spring Boot application.properties
spring.data.gemfire.security.client.authentication-initializer=\
example.app.geode.security.client.auth.CertificateBasedAuthInitialize.create
...
The server-side, custom SecurityManager implementation would then use the credentials to authenticate the client.
package example.app.geode.security.server.auth;

import java.security.cert.Certificate;
import java.util.Properties;
import org.apache.geode.security.SecurityManager;

class CertificateBasedSecurityManager implements SecurityManager {

  public Object authenticate(Properties securityProperties) {
    Certificate certificate = (Certificate) securityProperties.get("certificate");
    Object signedEvidence = securityProperties.get("evidence");
    // Verify the client's Certificate and use its PublicKey to verify the signed "evidence";
    // return a principal object on success, or throw an AuthenticationFailedException on failure.
    ...
  }
}
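The elided verification could be sketched roughly as follows, assuming the "evidence" carries both the randomly generated data and its signature, and applying the DN check from your question to the client's X.509 certificate (SHA256withRSA and the EvidenceVerifier helper are illustrative assumptions):

import java.security.Signature;
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;

class EvidenceVerifier {

  boolean verify(Certificate certificate, byte[] data, byte[] signatureBytes) throws Exception {

    // Authenticate based on the certificate DN (per the requirement in the question);
    // a real implementation would parse the DN rather than use a naive contains check.
    String dn = ((X509Certificate) certificate).getSubjectX500Principal().getName();
    if (!dn.contains("CN=example.com")) {
      return false;
    }

    // Use the Certificate's PublicKey to verify the signature over the random data.
    Signature signature = Signature.getInstance("SHA256withRSA");
    signature.initVerify(certificate.getPublicKey());
    signature.update(data);
    return signature.verify(signatureBytes);
  }
}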
If the servers in the Apache Geode cluster were configured and bootstrapped with Spring, then you would configure your custom SecurityManager implementation using:
# Spring Boot application.properties
spring.data.gemfire.security.manager.class-name=\
example.app.geode.security.server.auth.CertificateBasedSecurityManager
If you used Gfsh to start the Locators and Servers in your cluster, then refer to Apache Geode's documentation to configure properties on startup.
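For instance, a rough sketch of passing the security properties to Gfsh-managed members, assuming your properties live in a gemfire.properties file like the one sketched earlier:
gfsh> start locator --name=Locator1 --properties-file=/path/to/gemfire.properties
gfsh> start server --name=Server1 --properties-file=/path/to/gemfire.properties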
As you may also be aware (based on your tags), Apache Geode integrates with Apache Shiro, which introduces the concept of Realms for auth. Unfortunately, I did not find any support in Apache Shiro for Certificate-based Authentication (here); the available Realms provided by Apache Shiro are listed here (you can see support for ActiveDirectory, JDBC, JNDI, LDAP, and text-based Realms).
Of course, you could also devise an implementation of Apache Geode's SecurityManager interface along with the AuthInitialize interface, integrated with Spring Security and follow the general advice in Baeldung's blog post.
Hopefully this gives you enough to think about and some ideas on how to go about implementing Certificate-based Authentication between clients and servers (and peers?) in your [Spring] Apache Geode application/cluster.
Related
Has anybody connected Kafka and ACL to a Payara/Glassfish server that uses declared security annotations like @DeclaredRoles and @RolesAllowed?
I’m interested in the translation/connection from the ACL’s roles to the roles defined in the security realm used in the Payara server.
/Jan
I want the @RolesAllowed to work with the credentials of the calling user.
We just started using the Keycloak X Quarkus distribution, and we have made a user storage and user federation SPI.
The problem we are facing now is that we are unable to configure our SPI in keycloak.properties to set up a REST client to send requests to an external Quarkus API.
Before moving to Keycloak X we used Unirest to send HTTP REST requests, but since we moved to the Quarkus distribution we started to use the quarkus-rest-client dependency (which we use in all of our Quarkus applications).
When we start up Keycloak X locally we get the following log:
Unrecognized configuration key "quarkus.rest-client."path-to-rest-client-class".url" was provided; it will be ignored; verify that the dependency extension for this configuration is set or that you did not make a typo
which indicates that Keycloak X is unable to use the following dependency:
<dependency>
<groupId>io.quarkus</groupId>
<artifactId>quarkus-rest-client</artifactId>
</dependency>
and is unable to convert the property quarkus.rest-client."path-to-rest-client-class".url in keycloak.properties to a Keycloak property behind the scenes.
We have looked at the Keycloak.X Server Configuration guide, which explains in detail the rules we should follow in order to write configurations in keycloak.properties, and it says that Keycloak should have a custom config property for each Quarkus property unless it is considered an advanced usage and not a supported configuration.
So is there an equivalent config property for that? And what is the best way to send an HTTP request from a Quarkus-based user storage SPI to an external API?
I want to configure Keycloak to work across multiple tenants/realms, so how do I configure a client to work across multiple realms?
If you have a client application that is multi-tenant aware and every tenant is mapped to a different realm, different clients within a single realm, or a combination of both, you may want to implement a KeycloakConfigResolver in your client application and keep separate configs per client (see the sketch below).
Assuming you are using Java and OIDC, check out the adapter documentation for multi-tenant support.
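A minimal sketch of such a resolver might look like the following; the path-based tenant lookup and the per-tenant keycloak-<tenant>.json adapter configs are assumptions for illustration:

import java.io.InputStream;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.keycloak.adapters.KeycloakConfigResolver;
import org.keycloak.adapters.KeycloakDeployment;
import org.keycloak.adapters.KeycloakDeploymentBuilder;
import org.keycloak.adapters.spi.HttpFacade;

public class PathBasedKeycloakConfigResolver implements KeycloakConfigResolver {

  // Cache one KeycloakDeployment (i.e. one realm/client adapter config) per tenant.
  private final Map<String, KeycloakDeployment> cache = new ConcurrentHashMap<>();

  @Override
  public KeycloakDeployment resolve(HttpFacade.Request request) {
    // Hypothetical tenant resolution: the first path segment identifies the tenant,
    // e.g. "/tenant1/orders" maps to tenant "tenant1".
    String tenant = request.getRelativePath().split("/")[1];

    return cache.computeIfAbsent(tenant, t -> {
      // Load a per-tenant adapter config, e.g. /keycloak-tenant1.json, from the classpath.
      InputStream config = getClass().getResourceAsStream("/keycloak-" + t + ".json");
      return KeycloakDeploymentBuilder.build(config);
    });
  }
}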
According to the documentation for the WebHDFS REST API:
https://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Delegation_Token_Operations
It is mentioned that when security is on, there are two mechanisms:
Authentication using Kerberos SPNEGO when security is on
Authentication using Hadoop delegation token when security is on
If I choose to use the second option, i.e., authentication using a Hadoop delegation token when security is on:
Does it mean it can run without Kerberos configuration in the Hadoop setup?
Do I have to set up Kerberos in my Hadoop configuration in this case?
To put things in context: typically, you use SPNEGO when you start your HTTP session, then cache your credentials somehow to avoid the complex rounds of 3-way communication between client, server, and Kerberos KDC.
AFAIK, all the Hadoop UIs and REST APIs use a signed cookie after the initial SPNEGO, and it's completely transparent for you -- with the exception of WebHDFS.
Now, with WebHDFS, you have to manage your "credentials cache" explicitly:
1. Start your session with a GET ?op=GETDELEGATIONTOKEN -- you don't present any credentials, therefore it will trigger a SPNEGO authentication, then generate a Hadoop delegation token server-side.
2. Retrieve that delegation token from the JSON result.
3. Use that token to present your session credentials explicitly in the following GET / POST / PUT requests, by appending &delegation=XXXXXX to all URLs (see the Java sketch below).
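For illustration, here is a minimal Java sketch of step 3, assuming a hypothetical namenode URL and file path, and a delegation token already retrieved via steps 1 and 2:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class WebHdfsDelegationTokenExample {

  public static void main(String[] args) throws Exception {
    // Hypothetical namenode; a delegation token previously obtained from a
    // SPNEGO-authenticated GET ?op=GETDELEGATIONTOKEN is passed as the first argument.
    String nameNode = "http://namenode.example.com:50070";
    String delegationToken = args[0];

    // Present the cached "credentials" explicitly by appending &delegation=... to the URL.
    URL url = new URL(nameNode + "/webhdfs/v1/tmp/data.txt?op=GETFILESTATUS&delegation=" + delegationToken);

    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    connection.setRequestMethod("GET");

    // Print the JSON response; no SPNEGO round-trip is needed for this request.
    try (BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream()))) {
      reader.lines().forEach(System.out::println);
    }
  }
}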
Bottom line: yes, you have to set up your Kerberos configuration on the client side. The delegation token only allows you to minimize the authentication overhead.
I have a Java API that talks to the Kerberos server and performs various operations. As of now, my API requests non-renewable tickets from the Kerberos server. From what I understand, the JAAS config file has an option to set the renewTGT option to true so that a renewable ticket can be issued. However, JAAS seems to have a lot of restrictions on setting the "renewUntil" time. Can anyone please tell me how we can request a renewable ticket and also control its renewability? Basically, is there a way we can perform the Java equivalent of the operation kinit -R? Thanks in advance.
As of JDK7 (1.7.0_55), JAAS Krb5LoginModule does not provide any option to request a renewable TGT when authenticating, so this is not currently possible using JAAS. You might be able to achieve this, but you would need to use the internal Kerberos classes directly, bypassing JAAS.
Internally, Krb5LoginModule instantiates a sun.security.krb5.KrbAsReqBuilder to obtain credentials using either a provided password, or a keyTab. KrbAsReqBuilder has a setOptions(KDCOptions options) method, but this is not called in the login module. If it could be accessed, you could call KDCOptions#set(KDCOptions.RENEWABLE, true), and I would then expect the returned ticket to be renewable, if the KDC is configured to allow renewable tickets.
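Purely as a rough sketch of that internal (and unsupported) approach -- these are sun.security.krb5 internals, so class names and constructor signatures may vary across JDK versions, and every call below should be verified against your JDK's sources:

// On JDK 9+, access to these internals would additionally require something like
// --add-exports java.security.jgss/sun.security.krb5=ALL-UNNAMED
import sun.security.krb5.Credentials;
import sun.security.krb5.KrbAsReqBuilder;
import sun.security.krb5.PrincipalName;
import sun.security.krb5.internal.KDCOptions;

public class RenewableTgtSketch {

  public static void main(String[] args) throws Exception {
    // Hypothetical principal and password.
    PrincipalName principal = new PrincipalName("user@EXAMPLE.COM", PrincipalName.KRB_NT_PRINCIPAL);
    KrbAsReqBuilder builder = new KrbAsReqBuilder(principal, "secret".toCharArray());

    // The step Krb5LoginModule never performs: request a RENEWABLE ticket.
    KDCOptions options = new KDCOptions();
    options.set(KDCOptions.RENEWABLE, true);
    builder.setOptions(options);

    // Perform the AS exchange; the returned credentials should be renewable
    // if the KDC is configured to allow renewable tickets.
    Credentials credentials = builder.action().getCreds();

    builder.destroy();
  }
}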