How to connect to a Kerberos-protected cluster?

I have a Hadoop cluster that is Kerberized. How do I connect to it from an external Java application? I've been told I need a keytab and a krb5.conf file, but I'm not totally clear on how to use them.
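At a high level, the client JVM needs the krb5.conf so it can find the KDC, and the keytab so it can authenticate without a password. A minimal sketch using the Hadoop client API; the principal, keytab path, and NameNode address are placeholders to replace with your cluster's values:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberizedHdfsClient {
    public static void main(String[] args) throws Exception {
        // Tell the JVM where the Kerberos configuration lives
        System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");

        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020"); // placeholder
        conf.set("hadoop.security.authentication", "kerberos");

        // Log in with the keytab instead of an interactive password
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab(
                "user@EXAMPLE.COM", "/etc/security/keytabs/user.keytab");

        // Subsequent Hadoop calls run as the authenticated principal
        FileSystem fs = FileSystem.get(conf);
        System.out.println(fs.exists(new Path("/")));
    }
}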

Related

How do I access a DSE cluster from an app running in a Kubernetes cluster?

I have a Cassandra database installed on a server that I provide.
My customer, on the other hand, has a Kubernetes cluster with a deployed application that needs to connect to that database, and we experience the following error when the container tries to start up:
WARN [com.dat.oss.dri.int.cor.con.ControlConnection] (vert.x-eventloop-thread-1) [s0] Error connecting to Node(endPoint=cassandra:9042, hostId=null, hashCode=379c44fa), trying next node (UnknownHostException: cassandra: Temporary failure in name resolution)
Any suggestions on what I am missing or what I need to do in my cluster?
Do you have DNS set up so that the Cassandra service is reachable from the k8s cluster through the DNS name cassandra? Since this is an outside component, k8s relies on your external DNS resolution to discover this service.
Notice it is attempting to connect to cassandra:9042. This means k8s should be able to resolve the hostname cassandra somehow, internally or externally.
If not, you have to determine your service URL, like <some-IP>:<some-Port>/some_endpoint, and provide that to your k8s application, which will then connect to it directly.
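One common way to make the bare hostname cassandra resolvable inside the cluster is an ExternalName Service, which maps an in-cluster name to an external DNS name. A minimal sketch, assuming the database is reachable externally as cassandra.example.com:

apiVersion: v1
kind: Service
metadata:
  name: cassandra
spec:
  type: ExternalName
  # In-cluster lookups of "cassandra" resolve to this external DNS name
  externalName: cassandra.example.com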
The issue is that you haven't configured the correct contact points in your application. In the error you posted, your application is connecting to an unknown host cassandra:
... Error connecting to Node(endPoint=cassandra:9042, ...
but your app doesn't know how to resolve the hostname cassandra, leading to:
UnknownHostException: cassandra: Temporary failure in name resolution
We recommend that you specify at least two IP addresses of nodes in the "local DC" as contact points. For example, if you're using the Java driver to connect to your Cassandra cluster, configure the contact points with:
datastax-java-driver {
  basic {
    contact-points = [ "node_IP1:9042", "node_IP2:9042" ]
  }
}
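If you'd rather set the contact points programmatically than through the driver configuration file, a minimal sketch with the 4.x Java driver (the addresses and data center name are placeholders):

import java.net.InetSocketAddress;
import com.datastax.oss.driver.api.core.CqlSession;

// Build a session against explicit contact points instead of relying on DNS
CqlSession session = CqlSession.builder()
        .addContactPoint(new InetSocketAddress("node_IP1", 9042))
        .addContactPoint(new InetSocketAddress("node_IP2", 9042))
        .withLocalDatacenter("dc1") // must match the nodes' local DC name
        .build();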
Since your application is running in Kubernetes, you'll need to make sure that it has network connectivity to your Cassandra cluster. Cheers!

How can I connect to a MiniKF k8s cluster?

I installed Arrikto MiniKF in order to have MiniKF locally, but I need to connect to its k8s cluster to port-forward to its S3 service. To do this, I copied the ~/.kube/config from the Vagrant host to my localhost, but this didn't work.
So the question is: how can I connect to its cluster?
Thanks

How to add a keytab for the new Flume service that I created

I have a Cloudera Hadoop cluster with an HDFS service that has Kerberos authentication enabled.
I'm creating a Flume service with an agent deployed on an instance of my cluster. I want this role (the agent issued from the Flume service on that instance) to be able to write to my HDFS service.
To do that, the Flume agent needs its own keytab, which will give it the keys allowing it to authenticate to the HDFS cluster.
From the Cloudera documentation I read that:
At the end of the integration process using the configuration wizard, Cloudera Manager Server will create host principals and deploy keytabs for all services configured on the cluster, which means that Cloudera Manager Server requires a principal that has privileges to create these other accounts.
from here
But after instantiating my Flume service, I see no keytab in its user folder.
Is there something more that needs to be done for this keytab to be generated?
Not sure what kind of Cloudera installation you have, but on CDP Public Cloud keytabs aren't found in the user directories; they are found locally under /var/run/cloudera-scm-agent/process.
From there, pick the most recent folder for the service you need; the keytab is inside. Since a keytab allows you to authenticate as a user without knowing its password, only root can navigate those folders.
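If you end up managing the agent configuration yourself rather than through Cloudera Manager, the Flume HDFS sink can also be pointed at a principal and keytab explicitly. A minimal sketch, where the agent name, sink name, HDFS path, principal, and keytab path are all placeholders:

# HDFS sink writing to a kerberized cluster
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://namenode.example.com:8020/flume/events
# Credentials the agent uses to authenticate against HDFS
a1.sinks.k1.hdfs.kerberosPrincipal = flume/_HOST@EXAMPLE.COM
a1.sinks.k1.hdfs.kerberosKeytab = /path/to/flume.keytab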

How to make jdbc PAYARA connection to CloudSQL from GKE

I have a project on a Payara Server Full installation with JDBC connections. The project works fine on VMs (currently on GCP).
But I need to migrate to GKE. I have Payara Server Full running in a GCP pod, but I don't know how to make a JDBC connection to Cloud SQL.
Please help me.
Take a look at the Cloud SQL connector for Java.
You might also be interested in the Cloud SQL Auth proxy, which is covered in the "Connecting from Google Kubernetes Engine" documentation.
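With the connector, the JDBC URL carries the instance connection name and a socket factory class that tunnels to Cloud SQL, so no IP allow-listing is needed. A minimal sketch for PostgreSQL; the database name, instance connection name, and credentials are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class CloudSqlJdbcExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql:///mydb"
                + "?cloudSqlInstance=my-project:us-central1:my-instance"
                + "&socketFactory=com.google.cloud.sql.postgres.SocketFactory";

        Properties props = new Properties();
        props.setProperty("user", "dbuser");
        props.setProperty("password", "dbpassword");

        // The socket factory opens the connection through Cloud SQL's secure path
        try (Connection conn = DriverManager.getConnection(url, props)) {
            System.out.println("Connected: " + conn.isValid(5));
        }
    }
}

In Payara you would typically put the same URL and properties on a JDBC connection pool in the server configuration rather than calling DriverManager directly.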

Tell to OpenShift to use Kubernetes in a specific folder

I have to use a version of Kubernetes that I built myself, but I don't know how to tell OpenShift to use that version of Kubernetes.
At first I thought that I had to recompile the source code of OpenShift Origin, and I did. So, can someone tell me how to configure OpenShift to do what I explained above?
I use CentOS 7 on a CloudStack virtual machine.
Thanks in advance.
OpenShift can either run its own compiled-in Kubernetes components (which is the typical setup), or can run against an external Kubernetes server process. It does not manage launching an external Kubernetes binary.
You can run OpenShift against an external Kubernetes process by giving the OpenShift master a kubeconfig file containing connection information and credentials for an existing Kubernetes API server:
openshift start master --kubeconfig=/path/to/k8s.kubeconfig
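The kubeconfig is the standard Kubernetes client configuration format; a minimal sketch, with the server URL and credential paths as placeholders:

apiVersion: v1
kind: Config
clusters:
- name: external-k8s
  cluster:
    server: https://k8s-master.example.com:6443
    certificate-authority: /path/to/ca.crt
users:
- name: admin
  user:
    client-certificate: /path/to/admin.crt
    client-key: /path/to/admin.key
contexts:
- name: default
  context:
    cluster: external-k8s
    user: admin
current-context: default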