How to read public and private keys from key storage inside a job? - rundeck

I have three different keys inside my Key Storage: private, public, and password.
When editing a job, adding a parameter ("option"), and marking it as secure, I can navigate through the Key Storage to select the password key.
But I can't see or use the private and public keys.
Writing the full path doesn't work either: Rundeck says on job execution that the key "couldn't be read", even though the path is the same one shown on the Key Storage page.
Can you tell me how it is possible to use these keys in my job?

By design, and for security reasons, it isn't possible to use private/public keys in options. Alternatively, you can create a plugin to retrieve those keys. Here you can see an example of getting keys using a custom workflow step plugin.
Here is a general plugin development guide, and here are some examples.
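For reference, a minimal sketch of such a workflow step plugin. The class, plugin name, and the "path" configuration property are all hypothetical, and it assumes the Rundeck plugin API dependency on the classpath:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Map;

import com.dtolabs.rundeck.core.execution.workflow.steps.StepException;
import com.dtolabs.rundeck.core.execution.workflow.steps.StepFailureReason;
import com.dtolabs.rundeck.core.plugins.Plugin;
import com.dtolabs.rundeck.plugins.ServiceNameConstants;
import com.dtolabs.rundeck.plugins.step.PluginStepContext;
import com.dtolabs.rundeck.plugins.step.StepPlugin;

@Plugin(name = "key-storage-reader", service = ServiceNameConstants.WorkflowStep)
public class KeyStorageReaderStep implements StepPlugin {

    @Override
    public void executeStep(PluginStepContext context, Map<String, Object> configuration)
            throws StepException {
        // Key Storage path, e.g. "keys/project/myproject/id_rsa" (illustrative value)
        String path = String.valueOf(configuration.get("path"));

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try {
            // The storage tree exposes Key Storage to plugins under the job's authorization
            context.getExecutionContext().getStorageTree()
                    .getResource(path)
                    .getContents()
                    .writeContent(out);
        } catch (Exception e) {
            throw new StepException("Could not read key at: " + path, e,
                    StepFailureReason.IOFailure);
        }

        String keyMaterial = new String(out.toByteArray(), StandardCharsets.UTF_8);
        // Use the key material here (e.g. hand it to an API call); never log its contents.
        context.getLogger().log(2, "Loaded key from " + path);
    }
}
```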

Related

Azure Data Factory File encryption using Public Key

I have a situation where I have been supplied with a public key. I can encrypt a file using command-line gpg/pgp. However, I want to use ADF to save the file to a blob store in its encrypted form, using the customer-managed public key. I cannot do it by importing the private key pair into a key vault and using that key vault to encrypt the storage container, as I don't have the private key pair (it is not visible within the system which receives the encrypted file).
Is there a way to do this in ADF? I have seen one or two articles which use python scripts to decrypt a file in ADF, but not one to encrypt a file. Thanks for any help.
The "Encrypt Azure Data Factory with customer-managed keys" feature encrypts the data factory environment, i.e., the data that Data Factory stores in its system. Unfortunately, there is no out-of-the-box feature in Azure Data Factory to perform encryption/decryption of files.
You can, however, encrypt the data in the Storage account and in ADF separately using a customer-managed key.
I have reproduced the same and it works fine for me.
Go to the storage account and click on Encryption on the left side of the panel.
Select the key vault and the key you want to encrypt the data with.
In ADF, likewise, go to the Manage option on the left panel, click on Customer managed key, and add the Key URL to encrypt the ADF environment and the data associated with it.
Note: A customer-managed key can only be configured on an empty data factory. The data factory can't contain any resources such as linked services, pipelines, and data flows. It is recommended to enable the customer-managed key right after factory creation.
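As for the original ask (encrypting the file itself with the supplied public key), ADF has no native step for that, so it would have to happen in your own code, e.g. an Azure Function or custom activity invoked from the pipeline. Below is a minimal Java sketch of the usual hybrid approach under that assumption; the file names are placeholders, and the output is not OpenPGP-compatible (for files gpg can actually decrypt, you would instead use an OpenPGP library such as Bouncy Castle):

```java
import java.nio.ByteBuffer;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.SecureRandom;
import java.security.spec.X509EncodedKeySpec;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class HybridEncryptFile {
    public static void main(String[] args) throws Exception {
        // Customer's RSA public key as DER-encoded SubjectPublicKeyInfo (placeholder path)
        byte[] der = Files.readAllBytes(Paths.get("customer_public.der"));
        PublicKey publicKey = KeyFactory.getInstance("RSA")
                .generatePublic(new X509EncodedKeySpec(der));

        // 1. A fresh AES session key encrypts the file body (this is what gpg does internally)
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey sessionKey = kg.generateKey();

        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher aes = Cipher.getInstance("AES/GCM/NoPadding");
        aes.init(Cipher.ENCRYPT_MODE, sessionKey, new GCMParameterSpec(128, iv));
        byte[] body = aes.doFinal(Files.readAllBytes(Paths.get("input.csv")));

        // 2. RSA-OAEP wraps the session key; only the private-key holder can unwrap it
        Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        rsa.init(Cipher.ENCRYPT_MODE, publicKey);
        byte[] wrappedKey = rsa.doFinal(sessionKey.getEncoded());

        // Store IV + ciphertext; ship the wrapped session key alongside it
        Files.write(Paths.get("input.csv.enc"),
                ByteBuffer.allocate(iv.length + body.length).put(iv).put(body).array());
        System.out.println("wrapped session key: " + Base64.getEncoder().encodeToString(wrappedKey));
    }
}
```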

Is exposing Pulumi encryptedKey safe?

I have a stack that uses an AWS KMS key for, I believe, secrets and state encryption in the Pulumi stack configuration file Pulumi.<stack>.yaml.
Is it safe to expose this in a public repository? As I understand it, secrets are stored within stack configuration files as well, in encrypted form; would it be reasonably safe to expose those too in a public repository?
How exactly is this key generated, and what are the inner mechanics behind secrets management in Pulumi?
Yes, exposing these values in your code is completely safe.
The key is encrypted using your key provider, in this case an AWS KMS key. It's only possible to retrieve the value if someone has access to the AWS KMS key itself to decrypt it, and even then it's a bit of a hoop-jumping exercise.
I expose these values myself in source control, so you should be absolutely okay to leave them in your repo.
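For illustration, a Pulumi.<stack>.yaml using a KMS secrets provider looks roughly like this (all values below are placeholders, not real ciphertext):

```yaml
# Pulumi.<stack>.yaml -- placeholder values for illustration
secretsprovider: awskms://alias/pulumi-secrets?region=us-east-1
encryptedkey: AQICAHh...EXAMPLE...==        # stack data key, encrypted under the KMS key
config:
  myproject:dbPassword:
    secure: v1:AAAA...EXAMPLE...=           # secret value, encrypted with the data key
```

Pulumi generates a random data key for the stack, encrypts it with the KMS key you named, and stores only the encrypted copy as encryptedkey; each secure: value is ciphertext under that data key, so nothing in the file is usable without kms:Decrypt permission on the KMS key.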

Using Mirth Connect Destination Mappings for AWS Access Key Id results in Error

We use Vault to store our credentials. I've successfully grabbed the S3 Access Key ID and Secret Access Key using the Vault API, and used channelMap.put to create the mappings ${access_key} and ${secret_key}.
[screenshot: aws_s3_file_writer]
However when I use these in the S3 file writer I get the error:
"The AWS Access Key Id you provided does not exist in our records."
I know the Access Key ID is valid; it works if I plug it in directly in the S3 File Writer destination.
I'd appreciate any help on this. Thank you.
UPDATE: I had to convert the results to a string; that fixed it.
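In transformer JavaScript that fix looks something like the following. The vaultResponse variable and field names are hypothetical; the point is the explicit String() conversion, since the Vault lookup returns Java objects rather than JavaScript strings:

```javascript
// Coerce the Vault lookup results to strings before mapping them,
// otherwise ${access_key} may resolve to a Java object's representation.
channelMap.put('access_key', String(vaultResponse.data['access_key']));
channelMap.put('secret_key', String(vaultResponse.data['secret_key']));
```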
You can try moving the variable to a higher-scoped map. You can use globalChannelMap, globalMap, or configurationMap; I would use the last one, since it can store passwords not in plain text. You are currently using the channelMap, whose scope is limited to the current message while it travels through the channel.
You can read more about variable maps and their scopes in the Mirth User Guide, "Variable Maps" section, page 393. I think that part of the manual is really important to understand.
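A hedged illustration of reading from the higher-scoped maps in a JavaScript step (the key names are examples; configurationMap entries are defined under Settings > Configuration Map):

```javascript
// $cfg() and $gc() are Mirth's shorthand getters for the
// configurationMap and globalChannelMap respectively.
var accessKey = $cfg('access_key');
var secretKey = $gc('secret_key');
```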
See my comment; it was a race condition between Vault, Mirth, and AWS.

AWS KMS storing customer master key

I know I'm missing something here, but I'm struggling to understand the customer master key concept in AWS KMS. Below is the sample code.
Code to create master key:
```java
CreateKeyRequest req = new CreateKeyRequest();
CreateKeyResult result = kmsClient.createKey(req);
String customerMasterKey = result.getKeyMetadata().getKeyId();
```
Code to create data key using customer master key:
```java
GenerateDataKeyRequest dataKeyRequest = new GenerateDataKeyRequest();
dataKeyRequest.setKeyId(customerMasterKey);
dataKeyRequest.setKeySpec("AES_128");
GenerateDataKeyResult dataKeyResult = kmsClient.generateDataKey(dataKeyRequest);
```
Now, as per my understanding, I need to use the master key to decrypt the encrypted data key every time I want to encrypt/decrypt something. This means I need to store these two keys in some location. So if someone else gets access to these two keys, can they decrypt my data using the AWS Encryption SDK?
The master key never leaves AWS and is only accessible by someone with the appropriate access to your account and the key. If they have access to your account with the appropriate rights to use the key, then they can use the master key to encrypt/decrypt your data key. Remember the master key ID is not the actual key; therefore, being in possession of the key ID is not useful outside of AWS.
You do not store both keys; the master key ID will always be viewable using the console, CLI, or SDK (I assume, since I have not used it).
The data key is not managed by the KMS service; therefore, you'll have to store it (after encrypting it with the master key) along with the encrypted data.
The answer to your question is: if an unauthorised individual gets hold of your master key ID and your encrypted data key, there is no way they can use that master key unless they also have access to AWS credentials with the appropriate rights to use it.
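To make the flow concrete, here is a hedged end-to-end sketch with the AWS SDK for Java v1 (the key alias is hypothetical; note that only the encrypted data key and the ciphertext are ever stored):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import com.amazonaws.services.kms.AWSKMS;
import com.amazonaws.services.kms.AWSKMSClientBuilder;
import com.amazonaws.services.kms.model.DecryptRequest;
import com.amazonaws.services.kms.model.DecryptResult;
import com.amazonaws.services.kms.model.GenerateDataKeyRequest;
import com.amazonaws.services.kms.model.GenerateDataKeyResult;

public class EnvelopeEncryptionDemo {
    public static void main(String[] args) throws Exception {
        AWSKMS kmsClient = AWSKMSClientBuilder.defaultClient();
        String customerMasterKey = "alias/my-app-key"; // hypothetical key alias

        // KMS returns the data key twice: in plaintext (use, then discard)
        // and encrypted under the master key (safe to store).
        GenerateDataKeyResult dataKey = kmsClient.generateDataKey(
                new GenerateDataKeyRequest()
                        .withKeyId(customerMasterKey)
                        .withKeySpec("AES_128"));

        ByteBuffer ptBuf = dataKey.getPlaintext();
        byte[] plaintextKey = new byte[ptBuf.remaining()];
        ptBuf.get(plaintextKey);

        // Encrypt locally with the plaintext data key (demo cipher; prefer
        // an authenticated mode such as AES/GCM in real code).
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(plaintextKey, "AES"));
        byte[] cipherText = cipher.doFinal("hello".getBytes(StandardCharsets.UTF_8));

        // Persist only cipherText plus the ENCRYPTED data key; forget plaintextKey.
        ByteBuffer storedEncryptedKey = dataKey.getCiphertextBlob();

        // Later: only a caller allowed to use the master key can recover the data key.
        DecryptResult decrypted = kmsClient.decrypt(
                new DecryptRequest().withCiphertextBlob(storedEncryptedKey));
        ByteBuffer recBuf = decrypted.getPlaintext();
        byte[] recoveredKey = new byte[recBuf.remaining()];
        recBuf.get(recoveredKey);

        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(recoveredKey, "AES"));
        System.out.println(new String(cipher.doFinal(cipherText), StandardCharsets.UTF_8));
    }
}
```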

Private Key Template Inconsistent

I am attempting to generate an RSA key pair inside a SafeNet HSM. I copied the example templates specified in PKCS#11 for the private and public keys. When I generate the key pair, everything works fine. However, when I specify the following attribute values for the private key, C_GenerateKeyPair returns CKR_TEMPLATE_INCONSISTENT:
CKA_DECRYPT = false.
CKA_UNWRAP = true.
I can imagine why I get the template-inconsistent error, but I just want to verify it: since the unwrap operation is in essence a decrypt operation, it is not consistent to allow a key to unwrap while it cannot decrypt.
However, shouldn't these two operations be treated separately by PKCS11 implementations?
Thanks in advance.
You should not have to set both of them; they are indeed separate. In fact, recent versions of the Gemalto SafeNet HSMs have a partition policy that has to be enabled before so-called "multi-purpose keys" are even allowed. I think the inconsistency is not within the private key template, but rather between it and the corresponding public key template. You probably have to set the flags to the opposite values in the public key template, as in the sketch below.
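If that is the cause, the fix is to keep the two templates' capabilities mirrored. A minimal sketch of the attribute arrays (assuming the standard PKCS#11 headers; session setup and error handling omitted):

```c
#include "pkcs11.h"  /* OASIS PKCS#11 header, or the vendor SDK's copy */

/* Capability flags mirrored across the pair: if the private key may unwrap
 * but not decrypt, give the public key wrap but not encrypt. The remaining
 * attributes are the usual RSA generation parameters. */
static CK_BBOOL ckTrue  = CK_TRUE;
static CK_BBOOL ckFalse = CK_FALSE;
static CK_ULONG modulusBits = 2048;
static CK_BYTE publicExponent[] = { 0x01, 0x00, 0x01 };

static CK_ATTRIBUTE publicTemplate[] = {
    { CKA_WRAP,            &ckTrue,        sizeof(ckTrue)  },  /* mirrors CKA_UNWRAP  */
    { CKA_ENCRYPT,         &ckFalse,       sizeof(ckFalse) },  /* mirrors CKA_DECRYPT */
    { CKA_MODULUS_BITS,    &modulusBits,   sizeof(modulusBits) },
    { CKA_PUBLIC_EXPONENT, publicExponent, sizeof(publicExponent) },
};

static CK_ATTRIBUTE privateTemplate[] = {
    { CKA_UNWRAP,  &ckTrue,  sizeof(ckTrue)  },
    { CKA_DECRYPT, &ckFalse, sizeof(ckFalse) },
    { CKA_TOKEN,   &ckTrue,  sizeof(ckTrue)  },
    { CKA_PRIVATE, &ckTrue,  sizeof(ckTrue)  },
};
```

These would then be passed as the public and private templates to C_GenerateKeyPair with a CKM_RSA_PKCS_KEY_PAIR_GEN mechanism; whether a given combination is accepted still depends on the HSM's partition policy.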