Securing access to the REST API of Kafka Connect

The REST API for Kafka Connect is neither secured nor authenticated.
Since it is not authenticated, the configuration for a connector or its tasks is easily accessible by anyone. Because these configurations may contain details about how to access the source system (in the case of a SourceConnector) or the destination system (in the case of a SinkConnector), is there a standard way to restrict access to these APIs?

As of Kafka 2.1.0, it is possible to configure HTTP basic authentication for the REST interface of Kafka Connect without writing any custom code.
This became possible thanks to the REST extension mechanism (see KIP-285).
In short, the configuration procedure is as follows:
Add the extension class to the worker configuration file:
rest.extension.classes = org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension
Create a JAAS config file (e.g. connect_jaas.conf) for the application name 'KafkaConnect':
KafkaConnect {
org.apache.kafka.connect.rest.basic.auth.extension.PropertyFileLoginModule required
file="/your/path/rest-credentials.properties";
};
Create the rest-credentials.properties file in the above-mentioned location:
user=password
Finally, tell the JVM about your JAAS config file, for example by adding a command-line property:
-Djava.security.auth.login.config=/your/path/connect_jaas.conf
After restarting Kafka Connect, you will be unable to use the REST API without basic authentication.
Please keep in mind that these classes are examples rather than production-ready features.
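As a quick check, a minimal sketch like the following should get HTTP 200 with the Authorization header and 401 without it; the worker address and the user/password pair are placeholders matching the rest-credentials.properties example above:
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class ConnectAuthCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical worker address; adjust to your environment
        URL url = new URL("http://localhost:8083/connectors");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Credentials from rest-credentials.properties ("user=password")
        String token = Base64.getEncoder().encodeToString("user:password".getBytes("UTF-8"));
        conn.setRequestProperty("Authorization", "Basic " + token);
        // Expect 200 here; repeating the call without the header should yield 401
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}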
Links:
Connect configuration
BasicAuthSecurityRestExtension
JaasBasicAuthFilter
PropertyFileLoginModule

This is a known area in need of improvement, but for now you should put a firewall on the Kafka Connect machines and use either an API management tool (Apigee, etc.) or a reverse proxy (HAProxy, nginx, etc.) to ensure that HTTPS is terminated at an endpoint on which you can configure access-control rules, then have the firewall accept connections only from the secure proxy. With some products, the firewall, access control, and SSL/TLS termination can all be handled by fewer pieces of software.
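As an illustration only (not a production-ready setup), a reverse-proxy configuration along these lines is what the above describes; the certificate paths, upstream port, and allowed network are assumptions:
# nginx: terminate HTTPS in front of the Connect worker and restrict who can reach it
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/connect.crt;
    ssl_certificate_key /etc/nginx/certs/connect.key;

    location / {
        allow 10.0.0.0/24;                  # only the trusted admin network
        deny  all;
        proxy_pass http://127.0.0.1:8083;   # Connect REST API, reachable only locally
    }
}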

As of Kafka 1.1.0, you can set up SSL and SSL client authentication for the Kafka Connect REST API. See KIP-208 for the details.
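A minimal worker configuration sketch along the lines of KIP-208 might look like the following; the listener address, keystore/truststore paths, and passwords are placeholders, so check the Connect documentation for your version for the exact property names:
listeners=https://0.0.0.0:8443
listeners.https.ssl.keystore.location=/var/private/ssl/connect.keystore.jks
listeners.https.ssl.keystore.password=changeit
listeners.https.ssl.key.password=changeit
listeners.https.ssl.truststore.location=/var/private/ssl/connect.truststore.jks
listeners.https.ssl.truststore.password=changeit
listeners.https.ssl.client.auth=required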

You can now enable certificate-based authentication for client access to the REST API of Kafka Connect.
There is an example here: https://github.com/sudar-path/kc-rest-mtls

Related

How to propagate truststore updates in a cluster using Wildfly?

I have an application running on Wildfly 10 in a domain setup with more than 10 machines. Clients consume REST web services using SSL authentication. In this scenario we will be adding clients on a daily basis, so it is important to be able to propagate changes to the truststore to the whole server group.
It's not an option to centralize the truststore in one machine due to concurrency levels.
I would like to know if there is a way to achieve this using the CLI or any other alternatives.
Thanks in advance!
Given that Wildfly does not support reloading the truststore at runtime (see https://access.redhat.com/solutions/482133), you would copy the truststore file to all servers (by hand, by script, by Puppet/Ansible/your DevOps tool), and use CLI to restart the affected server groups in the domain.
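For example, after distributing the new truststore file you could restart the group from jboss-cli; the server-group name here is hypothetical, and the available operations may differ between WildFly versions:
# run in jboss-cli, connected to the domain controller
/server-group=main-server-group:restart-servers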
See also https://github.com/wildfly/quickstart/tree/10.x/helloworld-war-ssl for an example of implementing SSL auth. Basically, all clients get a certificate from your own CA, which you add to the truststore once. Then use RBAC for the authorization.

How to secure REST APIs in Spring Boot web application?

I have two Spring Boot web applications. Both applications have different databases and different sets of users. Also, both applications use Spring Security for authentication and authorisation which works properly.
At any given point I will have one instance of the first application running and multiple instances of the 2nd web application running.
I want to expose REST APIs from 1st web application (one instance running) and be able to use that REST APIs from 2nd web application (multiple instances running).
How do I make sure that the REST APIs can be accessed securely, with proper authentication, and by instances of the 2nd application only?
If you can change your security setup, I would recommend using OAuth2. Basically, it generates a token that your APP2 instances use to make the API calls.
You can see more here.
https://spring.io/guides/tutorials/spring-boot-oauth2/
http://websystique.com/spring-security/secure-spring-rest-api-using-oauth2/
But if you can't change your app's security, you can continue using your current scheme. In APP1 you can create a user for the API calls; this user only has access to the API services. In APP2 you need to store the credentials to access APP1. Finally, you log in to APP1 and invoke the API using an HTTP client; you can use Spring RestTemplate or Apache HttpComponents Client.
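A rough sketch of that approach using Spring's RestTemplate; the endpoint URL and the API user's credentials are placeholders:
import java.util.Base64;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

public class App1Client {
    public String fetchFromApp1() {
        // Credentials of the dedicated API user created in APP1 (placeholder values)
        String token = Base64.getEncoder().encodeToString("api-user:api-password".getBytes());
        HttpHeaders headers = new HttpHeaders();
        headers.set(HttpHeaders.AUTHORIZATION, "Basic " + token);

        RestTemplate restTemplate = new RestTemplate();
        ResponseEntity<String> response = restTemplate.exchange(
                "https://app1.example.com/api/resource",   // hypothetical APP1 endpoint
                HttpMethod.GET, new HttpEntity<>(headers), String.class);
        return response.getBody();
    }
}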
SSL-based authentication could be an option if you are serious about the security aspects.
Assuming the REST API exposed by App 1 is served over HTTPS, you can configure App 1 to ask the client to present its SSL/TLS certificate when it tries to access this REST API (exposed by App 1).
This helps verify that the client is indeed a client from App 2.
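In Spring Boot terms, App 1 could require client certificates with configuration roughly like the following; the store locations and passwords are placeholders:
server.ssl.key-store=classpath:app1-keystore.jks
server.ssl.key-store-password=changeit
# trust store containing the certificates (or CA) of the App 2 clients
server.ssl.trust-store=classpath:app1-truststore.jks
server.ssl.trust-store-password=changeit
# require every client to present a certificate
server.ssl.client-auth=need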
Two more cents:
If your App 1 REST API calls need load balancing, NGINX would be a good choice. The SSL client certificate authentication can be offloaded to NGINX, so your Spring Boot app no longer needs to worry about SSL-related configuration.
The solution we went with was to secure both using an OAuth2 client_credentials workflow. That is the OAuth2 flow where clients request a token on behalf of themselves, not a calling User.
Check out Spring Cloud Security
1) Secure your services using @EnableResourceServer
@SpringBootApplication
@EnableResourceServer
public class Application ...
2) Make calls from one service to another using an OAuth2RestTemplate
Check out Resource Server Token Relay in http://cloud.spring.io/spring-cloud-security/spring-cloud-security.html which will specify how to configure an Oauth2RestTemplate to forward on security context details (token) from one service to another.
3) Service A and Service B should be able to communicate using these techniques if they are configured using the same Oauth2 Client and Secret. This will be configured in the applications' application.properties file, hopefully injected by the environment. Oauth2 Scopes can be used as role identifiers. You could therefore say that only a Client with Scopes (api-read, api-write) should have access to Endpoint A in Service A. This is configurable using Spring Security's Authorization configuration as well as @EnableGlobalMethodSecurity
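As a sketch of step 2 using the (now legacy) spring-security-oauth2 classes; the token URI, client id, secret, and scopes are placeholders:
import java.util.Arrays;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.oauth2.client.OAuth2RestTemplate;
import org.springframework.security.oauth2.client.token.grant.client.ClientCredentialsResourceDetails;

@Configuration
public class OAuth2ClientConfig {

    @Bean
    public OAuth2RestTemplate oauth2RestTemplate() {
        // client_credentials flow: the service requests a token on behalf of itself
        ClientCredentialsResourceDetails details = new ClientCredentialsResourceDetails();
        details.setAccessTokenUri("https://auth.example.com/oauth/token");
        details.setClientId("service-a");
        details.setClientSecret("service-a-secret");
        details.setScope(Arrays.asList("api-read", "api-write"));
        return new OAuth2RestTemplate(details);
    }
}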

Akka remote actors filter connections by IP

I'm trying to add security to my remote actors. I've set untrusted-mode:
http://doc.akka.io/docs/akka/snapshot/scala/remoting.html
Is it possible to add IP filtering to allow connections only from a specific server? For example, I have one master and 10 slaves, and I want to allow only my master (a specific IP) to connect to my slaves.
Since the code is open source, anyone could just create a new instance of my master and connect to my real slaves. How can I make it secure?
Using IP filtering is not very secure as it's easy to fake an IP. Luckily Akka comes with secure transport support via SSL and secure cookie support.
A cookie is like an API key and will be required to establish the connection. SSL guarantees that the secure cookie cannot be stolen by eavesdropping. See this doc for an example.
I made a simple project that uses Akka remoting and SSL with a secure cookie. Try it out here. Read how to set up SSL certificate storage and such here.
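For classic Akka remoting, the application.conf would look roughly like the sketch below; the cookie value, keystore paths, and passwords are placeholders, and the exact setting names vary between Akka versions (the secure-cookie mechanism was deprecated in later releases):
akka.remote {
  enabled-transports = ["akka.remote.netty.ssl"]
  require-cookie = on
  secure-cookie = "090A030E0F0A05010900000A0C0E0C0B03050D05"
  netty.ssl {
    hostname = "0.0.0.0"
    port = 2552
    security {
      key-store = "/path/to/keystore.jks"
      key-store-password = "changeit"
      key-password = "changeit"
      trust-store = "/path/to/truststore.jks"
      trust-store-password = "changeit"
      protocol = "TLSv1.2"
    }
  }
}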

Spring Config Server for enterprise

I am trying to set up an enterprise-level Spring Config Server which will be used by multiple config client applications across the company. As the encrypt.key would be common across multiple clients, is it possible to protect one client application's sensitive information from other client applications? Am I missing something? Please help me.
That is one way to set things up. You can also let the config server handle decryption, so the clients only get decrypted values, and secure the connection from client to server using Spring Security.
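For example (property names from Spring Cloud Config; the URI, credentials, and cipher text are placeholders), each client only holds its own credentials for the config server, while the secrets stay encrypted at rest and are decrypted by the server:
# client-side bootstrap.properties
spring.cloud.config.uri=https://config.example.com
spring.cloud.config.username=app1
spring.cloud.config.password=app1-secret

# in the backing repository, secrets are stored with the {cipher} prefix
db.password={cipher}FKSAJDFGYOS8F7GLHAKERGFHLSAJ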

Security in Cassandra

How are Cassandra clusters usually secured? Should they always be kept local, or are there security features that make it reasonable to open the cluster up for external connections? As far as I understand, it seems like Cassandra doesn't have any "inbuilt security engine" for handling these kinds of things. I'm planning on building a service to talk to Cassandra; should that connection be made locally (on the same network as the cluster) or externally using DNS?
Cassandra has supported built-in password authentication and authorisation since version 1.2.
User credentials and privileges are kept internally, in system auth tables. This can be viewed as its "inbuilt security engine".
As for protecting connections (encryption), since version 1.2 there has been SSL support for both internode and client-to-node communication. The DataStax Enterprise platform additionally extends that with Kerberos/LDAP support to allow single sign-on.
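As a rough sketch of the relevant cassandra.yaml settings (keystore paths and passwords are placeholders):
# enable the internal auth tables
authenticator: PasswordAuthenticator
authorizer: CassandraAuthorizer

# encrypt node-to-node traffic
server_encryption_options:
  internode_encryption: all
  keystore: /etc/cassandra/conf/.keystore
  keystore_password: changeit
  truststore: /etc/cassandra/conf/.truststore
  truststore_password: changeit

# encrypt client-to-node traffic
client_encryption_options:
  enabled: true
  keystore: /etc/cassandra/conf/.keystore
  keystore_password: changeit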
Configure a stateful firewall to allow incoming connections, but allow outgoing traffic only in response to requests made to the server. Also, C* has inbuilt SSL support, but not all APIs can use SSL, so you'll have to pick a compatible one.