How to get the password for a Kerberos principal

For all the auto-generated Kerberos principals, for example HDFS, Hadoop, Livy, how can I get their passwords so that I can try kinit with them?
I created a Kerberized cluster in AWS EMR, and by default it auto-generated all these principals. Now I want to actually be able to authenticate to Kerberos with them, but I don't know their passwords.
How can I get their passwords? And since I have their keytabs, can I get their passwords from the keytabs?
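One note on the keytab question: a keytab holds encryption keys derived from the password, not the password itself, so the password cannot be recovered from a keytab short of brute force. You can, however, authenticate with the keytab directly and skip the password entirely. A minimal sketch, with the keytab path, principal, and realm as placeholders:

# A keytab stores keys derived from the password, not the password itself,
# so you can't extract the password -- authenticate with the keytab instead.
# Path, principal, and realm below are placeholders for your cluster.
klist -kt /etc/hdfs.keytab          # list the principals stored in the keytab
kinit -kt /etc/hdfs.keytab hdfs/$(hostname -f)@EC2.INTERNAL
klist                               # confirm a TGT was obtained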

Related

Password rotation for Kafka ACL passwords which are stored in ZooKeeper

How to handle password rotation for Kafka ACL passwords?
Users can't access the Kafka cluster without authentication; we add the user (and password) to ZooKeeper and add the respective ACLs for the user.
Now I have a requirement to rotate the passwords for these users, which are stored in ZooKeeper.
I don't think you can rotate passwords in that case without a chance of downtime (auth failures).
Ignoring the small chance of auth failures, what you could do is the following:
1) Change the passwords in ZooKeeper using the same command that you used to create the username/password.
2) Change your applications to use the new passwords.
The downside of this approach is that if your app restarts between steps 1 and 2 (i.e. ZooKeeper has been updated with the new password, but the app is still using the old password), the app will get auth failure errors.
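As a concrete sketch of those two steps, assuming the credentials are SCRAM entries managed through kafka-configs.sh (the ZooKeeper host, user name, and password are placeholders):

# Step 1: overwrite the user's SCRAM credential in ZooKeeper
# (the same command that created it; newer Kafka versions use
# --bootstrap-server instead of --zookeeper).
kafka-configs.sh --zookeeper zk-host:2181 --alter \
  --add-config 'SCRAM-SHA-512=[password=new-secret]' \
  --entity-type users --entity-name alice

# Step 2: update the client applications' JAAS configuration with the new
# password and restart them; between steps 1 and 2 they may see auth failures.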

AWS IAM User Access for Developer

I want to give my developer access to my MongoDB database, which is hosted on an EC2 instance on AWS.
He should be able to run mongodump, upload the new backend, and make some changes in our control panel.
I created an IAM user with EC2FullAccess permissions - I have seen that he was able to add his own IP to the security group so he could connect.
I don't feel comfortable with that - what should I do to make sure he has just enough access to do the necessary work:
Upload new code to the server
Do a MongoDB dump
I don't want him to be able to switch off/delete my instance or be able to delete my database at all.
Looking at your use case, you do not need to give any EC2 permissions; your developer does not even need an IAM user. He can simply have the IP of the instance and the login credentials for the EC2 instance - that should suffice to log in to the instance and make the required changes. No need for an IAM user or AWS Console access.
IAM roles are for the purpose of accessing a service on behalf of another. Say you want to access AWS DynamoDB or S3 from an EC2 instance; in that case, an IAM role with the required permissions attached to the EC2 instance will serve the purpose.
An IAM user is for people who need access to AWS services either through the Console or through the API (programmatic); AWS credentials are required to access the services.
In your case, MongoDB is installed on EC2 and your developer needs access to "the server on which MongoDB is installed", not to the "AWS EC2 service" itself.
As correctly pointed out in the answer by @X-Men, an IAM role or IAM user is not required at all. What is required is for your developer to have the IP of the server and credentials to log in to that server: username-password or username-key.
The restrictions you need on the developer with respect to MongoDB are to be configured in MongoDB itself, not at the EC2 level.
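As an illustration of such a MongoDB-level restriction, here is a sketch assuming authentication is enabled on the instance (user names and passwords are placeholders); the built-in backup role permits mongodump but not dropping databases:

# Create a MongoDB user that can run mongodump but cannot drop databases.
# Connects as an existing admin; "devbackup" and its password are placeholders.
mongo admin -u admin -p --eval '
  db.createUser({
    user: "devbackup",
    pwd:  "change-me",
    roles: [ { role: "backup", db: "admin" } ]
  })'

# The developer then dumps with:
mongodump --username devbackup --password change-me --authenticationDatabase admin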

Storing/Retrieving kinit password from shellscript

I'm automating the provisioning of a VM in a kerberized environment. After the new server is created, it needs to join the network. For this, I need to log in to the Kerberos server using kinit and then use net ads join.
The challenge for me is where to store the principal's password that I need to pass to kinit, and how to retrieve it securely. Of course, the requirement is that the automation program must be the only one that can retrieve the password from wherever it is stored.
Options I've considered so far:
1) I already know the option of storing the password in a vault (HashiCorp, CyberArk, etc.), but it takes too long to implement/manage and it's expensive.
2) Store the encrypted password on another VM (within the same private network) in an environment variable, and at runtime ssh into that VM, get the password, decrypt it, and then scp it over to the newly created VM (a sketch of this flow follows below).
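A sketch of what (2) could look like, with hostnames, paths, and the principal as placeholders; piping the password over stdin avoids writing it to disk on the new VM:

# Fetch the encrypted password from the secrets VM and decrypt it locally;
# the secret stays in memory rather than being scp'd as a file.
PASS=$(ssh secrets-vm 'cat /secrets/kinit.pass.gpg' | gpg --quiet --decrypt)

# Feed the password to kinit on the new VM via stdin, then join the domain
# using the Kerberos ticket just obtained.
printf '%s\n' "$PASS" | ssh new-vm 'kinit provisioner@EXAMPLE.COM && net ads join -k'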
Do any of the security experts here see issues with (2)? If yes, what are they?
What other options exist, if any?
Thanks in advance

Kerberos authentication with expiring passwords

We are using Java Kerberos authentication to connect to our SQL Server DB from Linux. Here we used the principal name and the password to generate a keytab file on the Linux system. Currently the connectivity works fine.
But there is an additional requirement to use expiring passwords, which expire every 3 months. In our other applications we use a CyberArk API which retrieves the password from a vault, so the Ops team need not bother with changing the password on the application server located on the Linux system.
Does anyone have any experience with using Kerberos in such an environment? We are basically looking at avoiding regenerating the keytab file every time the password expires.
I don't think you can avoid regenerating the keytab file when the password changes or expires. What you can do, however, is make it painless to generate the keytab file on the Linux server. This requires the Linux server to join Active Directory, using the RHEL native tool realm or Centrify software.
The RHEL documentation is here: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/windows_integration_guide/realmd-domain
For Centrify users: https://community.centrify.com/t5/Centrify-Express/Replace-SSH-Keys-with-Kerberos-Keytabs/td-p/10112
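As an example of how painless this can get once the server has joined the domain, here is a sketch using realm and MIT Kerberos' ktutil; the domain, principal, key version number, enctype, and paths are all placeholders:

# One-time: join the Linux server to the AD domain (RHEL realmd).
realm join --user=Administrator example.com

# After each password change, rebuild the keytab from the new password.
# Principal, kvno (-k), and enctype (-e) are placeholders for your setup;
# the second line of the heredoc answers ktutil's password prompt.
ktutil <<'EOF'
addent -password -p svcaccount@EXAMPLE.COM -k 2 -e aes256-cts-hmac-sha1-96
NewPasswordHere
wkt /etc/security/keytabs/svcaccount.keytab
quit
EOF

# Verify that password-less authentication works with the new keytab.
kinit -kt /etc/security/keytabs/svcaccount.keytab svcaccount@EXAMPLE.COM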

Ranger user sync with kerberos

I have set up Kerberos following the document below:
http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.0.0/bk_Ambari_Security_Guide/content/ch_configuring_amb_hdp_for_kerberos.html
Further, I would like to configure Ranger to sync all Kerberos principals to create ACLs. There is an option to sync users from AD, but I don't see any option to sync from Kerberos.
Can anyone please help with options for doing this? Thanks
I'm not sure I understand what exactly you want to import, but I assume you have AD and a local cluster KDC configured with trusted relations, and you want all your principals to be represented in Ranger by standalone user accounts. If you have trusted relations configured, that means your entire list of principals would consist of both local KDC and AD principals, and they would all be valid for authentication. But in Ranger you work not with the actual Kerberos principals but with usernames, which are obtained from the auth_to_local configuration setting according to the mapping rules specified there.
If you are running LDAP sync, it will pass all matching principals through the collection of rules in this configuration property and will create the end user accounts with the names obtained after executing these rules. You can check the end result using:
hadoop org.apache.hadoop.security.HadoopKerberosName principal@DOMAIN.COM
For the local KDC it really does not make sense to pass the principals through this mapping stage, as in the end they will all be mapped to local UNIX accounts. That is why you can just point to your group and passwd files in the UNIX sync source and assume that all your local Kerberos principals will be represented in the Ranger database with proper user accounts.
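For illustration, a hypothetical auth_to_local rule and a check of the mapping it produces (the realm and principal are placeholders):

# Hypothetical rule in core-site.xml under hadoop.security.auth_to_local:
#   RULE:[1:$1@$0](.*@DOMAIN.COM)s/@.*//
#   DEFAULT
# Check how a given principal is mapped to a local username:
hadoop org.apache.hadoop.security.HadoopKerberosName alice@DOMAIN.COM
# With the rule above, alice@DOMAIN.COM maps to the short name "alice".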
You can find some more details about Kerberos mappings in this article.