How to add a keytab for the new flume service that I created - kerberos

I have a Cloudera Hadoop cluster with an HDFS service that has Kerberos authentication enabled.
I'm creating a Flume service with an agent deployed on one instance of my cluster. I want this role (the agent created by the Flume service on that instance) to be able to write to my HDFS service.
To do that, the Flume agent needs its own keytab containing the keys that allow it to authenticate to the HDFS cluster.
In the Cloudera documentation I read that:
At the end of the integration process using the configuration wizard, Cloudera Manager Server will create host principals and deploy keytabs for all services configured on the cluster, which means that Cloudera Manager Server requires a principal that has privileges to create these other accounts.
from here
But after instantiating my Flume service, I see no keytab in its user folder.
Is there something more that needs to be done to get this keytab generated?

Not sure what kind of Cloudera installation you have, but on CDP Public Cloud keytabs aren't found in the user directories. They are found locally under /var/run/cloudera-scm-agent/process.
From there, pick the most recent folder for the service you need; the keytab is inside it. Since a keytab allows you to authenticate as a user without knowing its password, only root can browse those folders.
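As a rough sketch (run as root; the process directory name, keytab file name, and Kerberos principal below are placeholders that will differ on your cluster), you could locate the Flume keytab and test it like this:
# list the Flume-related process directories, newest first
ls -lt /var/run/cloudera-scm-agent/process/ | grep -i flume
# inspect the keytab and check which principal it holds
klist -kt /var/run/cloudera-scm-agent/process/<latest-flume-dir>/flume.keytab
# obtain a ticket with it to confirm it works against the KDC
kinit -kt /var/run/cloudera-scm-agent/process/<latest-flume-dir>/flume.keytab flume/<host-fqdn>@<YOUR-REALM>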

Related

How to connect to AWS ECS cluster?

I have successfully created an ECS cluster (EC2 Linux + Networking). Is it possible to log in to the cluster to perform some administrative tasks? I have not deployed any containers or tasks to it yet. I can't find any hints for this in the AWS console or the AWS documentation.
The "cluster" is just a logical grouping of resources. The "cluster" itself isn't a server you can log into or anything. You would perform actions on the cluster via the AWS console or the AWS API. You can connect to the EC2 servers managed by the ECS cluster individually. You would do that via the standard ssh method you would use to connect to any other EC2 Linux server.
ECS takes care of most of the administrative work for you. You simply have to deploy and manage your applications on ECS. If you set up ECS correctly, you will never have to connect to the instances.
Follow these instructions to deploy your service (docker image): https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service.html
Also you can use Cloudwatch to store container logs, so that you don't have to connect to instances to check the logs: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
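As a rough sketch, enabling the awslogs driver in a task definition's container definition looks roughly like this (the log group, region, and stream prefix are placeholders):
"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/my-app",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "my-app"
    }
}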

Migrating Cockroach DB from local machine to GCP Kubernetes Engine

Followed the instructions here to create a local 3-node secure cluster.
Got the Go example app running with the following DB connection string to connect to the secure cluster:
sql.Open("postgres", "postgresql://root@localhost:26257/dbname?sslmode=verify-full&sslrootcert=<location of ca.crt>&sslcert=<location of client.root.crt>&sslkey=<location of client.root.key>")
CockroachDB worked well locally, so I decided to move the DB (the database deployment, not the actual data) to GCP Kubernetes Engine using the instructions here.
Everything worked fine - the pods were created and I could use the built-in SQL client from the cloud console.
Now I want the previous example app to connect to this new cloud DB. I created a load balancer using the kubectl expose command and got a public IP to use in the code.
How do I get the new ca.crt, client.root.crt, client.root.key files to use in my connection string for the DB running on GCP?
We have 5+ developers and the idea is to have them write code on their local machines and connect to the cloud db using the connection strings and the certificates.
Or is there a better way to let 5+ developers use a single DEV DB cluster running on GCP?
The recommended way to run against a Kubernetes CockroachDB cluster is to have your apps run in the same cluster. This makes certificate generation fairly simple. See the built-in SQL client example and its config file.
The config above uses an init container to send a CSR for client certificates and makes them available to the container (in this case just the cockroach sql client, but it could be any other client).
If you wish to run a client outside the Kubernetes cluster, the simplest way is to copy the generated certs directly from the client pod. It's recommended to use a non-root user (see the command sketch after these steps):
create the user through the SQL command
modify the client-secure.yaml config for your new user and start the new client pod
approve the CSR for the client certificate
wait for the pod to finish initializing
copy the ca.crt, client.<username>.crt and client.<username>.key from the pod onto your local machine
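A rough command sketch of the last three steps (the CSR name, pod name, certificate path, and username follow the naming used in the linked client-secure.yaml example and are placeholders for your own setup):
# approve the client certificate CSR created by the pod's init container
kubectl certificate approve default.client.myuser
# copy the certs from the client pod to the local machine
kubectl cp cockroachdb-client-secure:/cockroach-certs/ca.crt ./ca.crt
kubectl cp cockroachdb-client-secure:/cockroach-certs/client.myuser.crt ./client.myuser.crt
kubectl cp cockroachdb-client-secure:/cockroach-certs/client.myuser.key ./client.myuser.key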
Note: the public DNS or IP address of your kubernetes cluster is most likely not included in the node certificates. You either need to modify the list of hostnames/addresses before bringing up the nodes, or change your connection URL to sslmode=verify-ca (see client connection parameters for details).
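With those certs copied locally, the connection string from the question could then look roughly like this (myuser, the load-balancer IP, and the cert locations are placeholders):
sql.Open("postgres", "postgresql://myuser@<load balancer ip>:26257/dbname?sslmode=verify-ca&sslrootcert=<location of ca.crt>&sslcert=<location of client.myuser.crt>&sslkey=<location of client.myuser.key>")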
Alternatively, you could use password authentication in which case you would only need the CA certificate.

How to configure DC automatically when use Spring Cloud Consul?

When using Spring Cloud and Consul for service discovery, if a service wants to access another service located in a different DC, the DC name must be specified in the configuration as follows:
spring.cloud.consul.discovery.datacenters.STORES=dc-west
Here STORES is a service name and dc-west is a DC name.
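For example, if STORES lives in dc-west and a second (made-up) BILLING service lives in dc-east, both mappings currently have to be maintained by hand:
spring.cloud.consul.discovery.datacenters.STORES=dc-west
spring.cloud.consul.discovery.datacenters.BILLING=dc-east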
I think the ideal situation would be for the DC of any service to be discovered automatically, rather than configured through a configuration file or configuration service.
So my question is: will this DC auto-discovery mechanism be provided in the future? Or can it currently be achieved in another way?

how can spring cloud dataflow app use private Docker repository?

I have a Spring Cloud Data Flow server deployed on a Kubernetes cluster (not a local SCDF server run from a jar). It requires Docker images to register the apps, but my private Docker repository needs credentials for authentication.
Does anyone know in which configuration item/file I should put my private Docker repository credentials?
Thanks a lot!
There's no special handling required from the SCDF perspective.
Within the Kubernetes cluster, on the backing VMs, if the Docker daemon is logged into the private registry, then at app-resolution time SCDF will run the app on that same Docker daemon, so everything should work automatically.
In other words, it is a matter of configuration between the Kubernetes cluster and the private registry - nothing specific to SCDF.
For example, PKS and Harbor integration comes out of the box with this setup.
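For instance, the daemon login on each backing VM/node would look roughly like this (the registry address is a placeholder):
docker login registry.example.com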
EDIT
If the above setup doesn't work, there's the option to create a Secret in Kubernetes for the private registry - see the docs here.
Once you have that configured, you can pass it to SCDF via the spring.cloud.deployer.kubernetes.imagePullSecret property. Going by the example in the Kubernetes docs, the value for this property would be regcred.
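As a sketch, the secret from the Kubernetes docs is created with kubectl (the server, username, password, and email are placeholders) and its name is then handed to SCDF through the deployer property:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-username> --docker-password=<your-password> --docker-email=<your-email>
spring.cloud.deployer.kubernetes.imagePullSecret=regcred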

MongoDB MMS for cluster with keyFile auth

I have a sharded & replicated MongoDB cluster which uses keyFile auth.
I am trying to configure the MongoDB MMS Agent to communicate with all of the cluster members.
I've tried installing the MMS agent on every cluster member and giving mms.10gen.com the IP/port of each cluster member. The agent reports that it is unauthorized and I get no data.
It appears that MMS does not support keyFile auth, but is this not the standard production cluster setup?
How can I set up MMS for this kind of cluster?
I posted this on the 10gen-mms mailing list and found the answer.
keyFile authentication is meant only for intra-cluster communication and communication between MongoS instances and the cluster.
Specifying keyFile auth means that you can access the cluster without a username/password via a MongoS (which has the keyFile), as long as no users exist.
If a user is created, then user auth is additionally required.
You can create a user locally on each of the MongoD instances and use that to connect directly to them without a keyFile.
So the solution was to create users for MMS and my front-end service and to use user authentication.
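As a rough sketch, creating such a user in the mongo shell could look like this (the user name, password, and role are illustrative, and on pre-2.6 releases - current in the 10gen/MMS era - both the command (db.addUser) and the role names differ):
mongo admin
> db.createUser({ user: "mms-monitoring", pwd: "changeme", roles: [ "clusterMonitor" ] })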