Security in Cassandra - nosql

How are Cassandra clusters usually secured? Should they always be kept local, or are there security features that make it reasonable to open the cluster up to external connections? As far as I understand, it seems like Cassandra doesn't have any "built-in security engine" for handling these kinds of things. I'm planning on building a service that talks to Cassandra; should that connection be made locally (on the same network as the cluster) or externally via DNS?

Cassandra has supported built-in password authentication and authorization since version 1.2.
User credentials and privileges are kept internally, in the system_auth tables. This can be viewed as its "built-in security engine".
As for protecting connections (encryption), since version 1.2 there has been SSL support for both internode and client-to-node communication. The DataStax Enterprise platform additionally extends that with Kerberos/LDAP support to allow single sign-on.
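A minimal sketch of what enabling this looks like (option names are for Cassandra 1.2+; the keyspace and user names below are made up for illustration). In cassandra.yaml, replace the default allow-all implementations:

authenticator: PasswordAuthenticator
authorizer: CassandraAuthorizer

Then manage accounts and privileges via CQL, logged in as the default cassandra superuser (which you should replace immediately):

CREATE USER appuser WITH PASSWORD 'change-me' NOSUPERUSER;
GRANT SELECT ON KEYSPACE myapp TO appuser;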

Configure a stateful firewall to allow incoming connections only on the ports Cassandra actually needs, and allow outgoing traffic only in response to requests made to the server. Also, C* has built-in SSL support, but not all client APIs can use SSL, so you'll have to pick a compatible one.
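For the SSL part, client-to-node encryption is switched on in cassandra.yaml along these lines (paths and password are placeholders; internode traffic is configured separately under server_encryption_options):

client_encryption_options:
    enabled: true
    keystore: /path/to/keystore.jks
    keystore_password: change-me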

Related

Grafana | Auth Proxy - Security

I am trying to implement Grafana Auth Proxy as documented at
https://grafana.com/docs/grafana/latest/auth/auth-proxy/
https://community.grafana.com/t/django-auth-valid-session-on-grafana-behind-nginx/2793/6
Based on how it works, it seems X-WEBAUTH-USER is set in plain text, so anyone who can spoof it can get logged in.
Grafana does have an IP whitelist, BUT I don't think it's practical to maintain IP addresses of Docker containers (Django and Grafana are running in separate Docker containers).
Questions:
Is there a better implementation to achieve something more secure?
Can the whitelist take an easier value?
That is by design. Auth Proxy offloads the authentication to your own legacy "auth" server. Of course you will need to secure the connection between the auth server and Grafana, so that no one is able to spoof it. For example, you may create a dedicated Docker network, a mutual TLS connection, a VPN, etc., to which users don't have access. The best approach depends on the infrastructure in use. If you are not able to secure this communication properly, then Auth Proxy is not the best auth method for you.
IMHO the best authentication (and single sign-on) protocol that is also supported by Grafana is OpenID Connect (or SAML for Grafana Enterprise). But you will need an identity provider which supports these standards.
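To illustrate the usual pattern (a sketch only; the upstream name grafana:3000 and the htpasswd path are assumptions): the proxy authenticates the user and injects the header, and Grafana is reachable only through the proxy:

location / {
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_set_header X-WEBAUTH-USER $remote_user;
    proxy_pass http://grafana:3000;
}

On the Grafana side, [auth.proxy] must be enabled with header_name = X-WEBAUTH-USER, and the whitelist can then be limited to the proxy's address.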

What is the best Authorized networks setting for PostgreSQL, as it is missing the App Engine authorization setting?

The option to authorize all apps belonging to the same project is missing in Google Cloud SQL - PostgreSQL. The documentation provides examples of authorization using the network setting 0.0.0.0/0, which simply allows all IPv4 connections.
As we do not know when the App Engine authorization feature will be available for PostgreSQL, what is the next best setting to allow the IP range of App Engine instances? I am at a loss, as they are dynamically allocated and ephemeral.
Specs
App Engine Flex (1 aspnetcore + 1 custom service on dotnet core)
Cloud SQL - PostgreSQL
Both belong to the same GCP project
The way to go in this case is to follow the documentation steps:
add 0.0.0.0/0 as the network and configure SSL access from App Engine Flexible to the Cloud SQL PostgreSQL instance. The crucial part here is to adjust the PostgreSQL instance details, namely the SSL connections configuration. You need to allow only SSL connections to reach your instance; this way the GAE Flex instances (and only they, as holders of the SSL certificate) will be able to reach the database instance, even with dynamically allocated IPs.
To allow only SSL connections to your PostgreSQL instance:
Go to Cloud Console, choose the SQL section
Click on your PostgreSQL instance to view its details
Click the Allow only SSL connections button in the SSL tab
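The same switch can be flipped from the command line (a sketch; my-instance is a placeholder name):

gcloud sql instances patch my-instance --require-ssl

Afterwards, create a client certificate for the app and point your connection string at the client cert, client key, and server CA file.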

Securing access to REST API of Kafka Connect

The REST API for Kafka Connect is not secured or authenticated.
Since it's not authenticated, the configuration for a connector or its tasks is easily accessible by anyone. Since these configurations may contain details about how to access the source system [in the case of a SourceConnector] or the destination system [in the case of a SinkConnector], is there a standard way to restrict access to these APIs?
In Kafka 2.1.0, it is possible to configure HTTP basic authentication for the REST interface of Kafka Connect without writing any custom code.
This became possible thanks to the REST extensions mechanism (see KIP-285).
In short, the configuration procedure is as follows:
Add the extension class to the worker configuration file:
rest.extension.classes = org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension
Create a JAAS config file (e.g. connect_jaas.conf) for the application name 'KafkaConnect':
KafkaConnect {
    org.apache.kafka.connect.rest.basic.auth.extension.PropertyFileLoginModule required
    file="/your/path/rest-credentials.properties";
};
Create the rest-credentials.properties file in the above-mentioned directory; each line maps a user to a password:
user=password
Finally, point the JVM at your JAAS config file, for example by adding a command-line property:
-Djava.security.auth.login.config=/your/path/connect_jaas.conf
After restarting Kafka Connect, you will no longer be able to use the REST API without basic authentication.
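To verify (assuming Connect's default REST port 8083 and a user/password pair defined in rest-credentials.properties):

curl http://localhost:8083/connectors
# -> 401 Unauthorized
curl -u user:password http://localhost:8083/connectors
# -> list of connectors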
Please keep in mind that the classes used here are examples rather than production-ready features.
Links:
Connect configuration
BasicAuthSecurityRestExtension
JaasBasicAuthFilter
PropertyFileLoginModule
This is a known area in need of future improvement. For now, you should use a firewall on the Kafka Connect machines, plus either an API management tool (Apigee, etc.) or a reverse proxy (HAProxy, nginx, etc.), to ensure that HTTPS is terminated at an endpoint where you can configure access control rules; then have the firewall accept connections only from the secure proxy. With some products, the firewall, access control, and SSL/TLS termination functions can all be combined in fewer components.
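A minimal sketch of the reverse-proxy variant (certificate paths and the htpasswd file are placeholders; Connect is assumed to listen on its default port 8083, reachable only from the proxy host):

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/connect.crt;
    ssl_certificate_key /etc/nginx/certs/connect.key;
    location / {
        auth_basic "Kafka Connect";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass http://127.0.0.1:8083;
    }
}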
As of Kafka 1.1.0, you can set up SSL and SSL client authentication for the Kafka Connect REST API. See KIP-208 for the details.
This lets you enable certificate-based authentication for client access to the REST API of Kafka Connect.
An example is available here: https://github.com/sudar-path/kc-rest-mtls
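Per KIP-208, the relevant worker properties look roughly like this (keystore/truststore paths and passwords are placeholders; the ssl.* options can also be scoped per listener with the listeners.https. prefix):

listeners=https://0.0.0.0:8443
ssl.keystore.location=/path/to/keystore.jks
ssl.keystore.password=change-me
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=change-me
ssl.client.auth=required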

Recommended way to connect cloud foundry to mongodb atlas

I've got a Spring Boot app which is connected to MongoDB Atlas.
Everything is working locally.
I now want to publish this to Pivotal Cloud Foundry.
Secure connection between PCF and Atlas
In MongoDB Atlas I need to open up the firewall and allow certain IP addresses.
How should I configure MongoDB Atlas to connect to PCF in the most secure way?
Autoconfiguration getting in the way
Cloud Foundry is overriding my connection URLs to point to localhost:27017 instead of my Atlas cluster.
What is the recommended way to connect to MongoDB Atlas?
In MongoDB Atlas I need to open up the firewall and allow certain IP addresses. How should I configure MongoDB Atlas to connect to PCF in the most secure way?
Whitelisting IP addresses for applications that run on CF is not particularly effective. The reason it's not effective is that you don't know the IP address from which you'll be connecting, because it depends on where Diego decides to run your application; in other words, it depends on the cell where your application is told to run. To compound matters, that will change when you restart / restage your application.
Because the IP can vary, what you end up needing to do is whitelist all of your cells. The problem with this, and why it's not effective, is that you've ended up whitelisting every app running on the platform.
What you can do to improve security a bit is to make use of application security groups. ASGs can be used to limit outgoing traffic, and you can control them at the space level. That means you can configure your default running security group to not allow access to your MongoDB server, but allow access for individual spaces by binding an ASG only to those spaces with apps that need to talk to your MongoDB servers.
The downside of this approach is that it requires you to be a platform administrator, which means it will only work if you own your CF installation (not going to work for public providers).
More on ASGs here: https://docs.cloudfoundry.org/adminguide/app-sec-groups.html
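As a sketch of what binding an ASG looks like (the Atlas address 203.0.113.10, the org/space names, and the group name are all placeholders, and platform admin rights are required):

asg.json:
[
  { "protocol": "tcp", "destination": "203.0.113.10/32", "ports": "27017" }
]

cf create-security-group atlas-access asg.json
cf bind-security-group atlas-access my-org my-space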
For public providers, you can use a proxy. To make this work, you need to have your application configured to talk through a proxy when it attempts to access your Mongodb servers. You control the proxies, which have fixed IPs, so you can white list the proxies to allow access to just your app. If you don't want to run your own proxy servers, there are public proxy providers that you can use.
Cloud Foundry is overriding my connection URLs to point to localhost:27017 instead of my Atlas cluster. What is the recommended way to connect to MongoDB Atlas?
It's possible to disable auto-configuration. One way is described in the docs here: if you include the Spring Cloud Connectors dependencies and use them manually, then the auto-configuration will not run.
https://docs.cloudfoundry.org/buildpacks/java/spring-service-bindings.html#manual
The other option is to tell the Java buildpack not to install the auto-configuration. You can do that by setting the following environment variable for your application, either with cf set-env or via a manifest.yml file.
Ex: JBP_CONFIG_SPRING_AUTO_RECONFIGURATION='[enabled: false]'
Be careful if you do this, as it will disable everything provided by the auto-reconfiguration, which includes setting the "cloud" profile for your app. If you use this option to disable auto-reconfiguration, you'll probably also want to set SPRING_PROFILES_ACTIVE='cloud' to manually enable the cloud profile.
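In a manifest.yml, that combination looks like this (my-app is a placeholder):

applications:
- name: my-app
  env:
    JBP_CONFIG_SPRING_AUTO_RECONFIGURATION: '[enabled: false]'
    SPRING_PROFILES_ACTIVE: cloud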
I suppose your other option is to simply embrace the auto-configuration. It's a little confusing / magical at first, but I've found that this article explains it very well.
https://spring.io/blog/2015/04/27/binding-to-data-services-with-spring-boot-in-cloud-foundry
Hope that helps!

Single Sign-On for Rich Clients ("Fat Client") without Windows Logon

Single sign-on (SSO) for web applications (used through a browser) is well documented and established. Establishing SSO for rich clients is harder, and is usually suggested on the basis of Kerberos tickets, in particular using a Windows login against an Active Directory domain.
However, I'm looking for a more generic solution for the following: I need to establish "real" SSO (one identity for all applications, i.e. not just password synchronization across applications), where on the client side (unmanaged computers, incl. non-Windows), the "end clients" are a Java application and a GTK+ application. Both communicate with their server counterparts using an HTTP-based protocol (say, web services over HTTPS). The clients and the servers do not necessarily sit in the same LAN/intranet, but the clients can access the servers from the extranet. The server side of all the applications sits in the same network area, and the SSO component can access the identity provider via LDAP.
My question is basically "how can I do that"? More specifically,
a) is there an agreed-upon mechanism for secure, protected client-side "SSO session storage", as is the case with SSO cookies for browser-accessed applications? Possibly something like emulating Kerberos (TGTs?), or even directly re-using it where no Active Directory authentication has been performed on the client side?
b) are there any protocols/APIs/frameworks for the communication between rich clients and the other participants of SSO (as there are for cookies)?
c) are there any APIs/frameworks for pushing Kerberos-like TGTs and session tickets over the network?
d) are there any example implementations / tutorials available which demonstrate how to perform rich-client SSO?
I understand that there are "fill-out" agents which learn to enter the credentials into the application dialogs on the client side. I'd rather not use such a helper if possible.
Also, I would like to use CAS, Shibboleth, and other open-source components where possible.
Thanks for comments, suggestions and answers!
MiKu
Going with an AD account IS the generic solution. Kerberos is ubiquitous. It is the only mechanism which will ask you for your credentials once, and just once, at logon time.
This is all feasible; you need:
A KDC
Correct DNS entries
KDC accounts
Correct SPN entries
Client computers configured to talk to the KDC
Java app using JAAS with JGSS to obtain service tickets
GSS-API with your GTK+ app to obtain service tickets
What have you figured out yourself so far?
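For the Java side mentioned above, obtaining a service ticket via JGSS looks roughly like this (a sketch: the SPN HTTP@app.example.com is made up, the code must run inside a JAAS login context using Krb5LoginModule so a TGT is available, and GSSException handling is omitted):

import org.ietf.jgss.*;

GSSManager manager = GSSManager.getInstance();
// Kerberos v5 mechanism OID
Oid krb5 = new Oid("1.2.840.113554.1.2.2");
// Service principal of the server counterpart, in host-based form
GSSName service = manager.createName("HTTP@app.example.com", GSSName.NT_HOSTBASED_SERVICE);
GSSContext ctx = manager.createContext(service, krb5, null, GSSContext.DEFAULT_LIFETIME);
ctx.requestMutualAuth(true);
// The returned token carries the service ticket; send it to the server
// (e.g. base64-encoded in an HTTP "Authorization: Negotiate" header).
byte[] token = ctx.initSecContext(new byte[0], 0, 0);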
Agreed with Michael that GSSAPI/Kerberos is what you want to use. I'll add that there’s a snag with Java, however: by default, JGSS uses its own GSSAPI and Kerberos implementations, written in Java in the JDK, and not the platform’s libraries. Thus, it doesn’t obey your existing configuration and doesn’t work like anything else (e.g. on Unix it doesn’t respect KRB5CCNAME or other environment variables you’re used to, can’t use the DNS to locate KDCs, has a different set of supported ciphers, etc.). It is also buggy and limited; it can’t follow referrals, for example.
On Unix platforms, you can tell JGSS to bypass the JDK code and use an external GSSAPI library by starting the JVM with:
-Dsun.security.jgss.native=true -Dsun.security.jgss.lib=/path/to/libgssapi_krb5.so
There is no analogous option on Windows to use SSPI, however. This looks promising:
http://dblock.github.com/waffle/
... but I haven't gotten around to addressing this issue yet.