HashiCorp Vault - export key from one vault, import into another vault

I'd like to export a key from one vault and import it into another vault.
It feels like there should be an easy way to do this from the command line, but I don't see a simple, self-contained way to fully export and then import a key.
Is there any way to do this? I would prefer a command-line solution using the vault CLI.

The only way to do that is by chaining two vault commands: read the value out of the first vault, then write it to the second one. For example:
export VAULT_TOKEN=valid-token-for1
export VAULT_ADDR=https://vault1
JSON_DATA=$(vault kv get -format json -field data secret/foo)
export VAULT_TOKEN=valid-token-for2
export VAULT_ADDR=https://vault2
echo "$JSON_DATA" | vault kv put secret/foo -
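The same copy can also be done over the HTTP API if you'd rather not juggle environment variables in one shell. A minimal sketch, assuming both sides run KV version 2 mounted at secret/ (which the kv commands above imply) and that jq is installed:
JSON_DATA=$(curl -s -H "X-Vault-Token: valid-token-for1" \
    https://vault1/v1/secret/data/foo | jq '{data: .data.data}')
curl -s -H "X-Vault-Token: valid-token-for2" \
    -X POST -d "$JSON_DATA" https://vault2/v1/secret/data/foo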

We are developing an open-source CLI tool that does exactly what you need.
The tool can handle a single secret or a full tree structure on both import and export. It also supports end-to-end encryption of your secrets between export and import across Vault instances.
https://github.com/jonasvinther/medusa
export VAULT_ADDR=https://192.168.86.41:8201
export VAULT_SKIP_VERIFY=true
export VAULT_TOKEN=00000000-0000-0000-0000-000000000000
./medusa export kv/path/to/secret --format="yaml" --output="my-secrets.txt"
./medusa import kv/path/to/new/secret ./my-secrets.txt
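To move a secret between two instances, repoint the environment variables between the export and the import. A sketch reusing the commands above, with addresses and tokens as placeholders:
# export from the source instance
export VAULT_ADDR=https://source-vault:8201
export VAULT_TOKEN=<source-token>
./medusa export kv/path/to/secret --format="yaml" --output="my-secrets.txt"
# import into the destination instance
export VAULT_ADDR=https://destination-vault:8201
export VAULT_TOKEN=<destination-token>
./medusa import kv/path/to/new/secret ./my-secrets.txt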

The only way to export data from one Vault to another is to do it individually for every key (and every path). I've written a small bash script to automate this for all keys under a given path.
The script reads the data for each key (under the given path) from the source Vault and writes it into the destination Vault.
You need to provide the Vault URL, token, and CA certificate (for HTTPS verification) for the source and destination Vaults, as well as the path containing the keys, in the script below -
#!/usr/bin/env bash
source_vault_url="<source-vault-url>"
source_vault_token="<source_vault_token>"
source_vault_cert_path="<source_vault_cert_path>"
destination_vault_url="<destination_vault_url>"
destination_vault_token="<destination_vault_token>"
destination_vault_cert_path="<destination_vault_cert_path>"
# secret_path is the path from which the keys are to be exported from the source vault to the destination vault
secret_path="<path-without-slash>"

function _set_source_vault_env_variables() {
    export VAULT_ADDR=${source_vault_url}
    export VAULT_TOKEN=${source_vault_token}
    export VAULT_CACERT=${source_vault_cert_path}
}

function _set_destination_vault_env_variables() {
    export VAULT_ADDR=${destination_vault_url}
    export VAULT_TOKEN=${destination_vault_token}
    export VAULT_CACERT=${destination_vault_cert_path}
}

_set_destination_vault_env_variables
printf "Enabling the kv-v2 secrets engine at the path %s in the destination vault -\n" "${secret_path}"
vault secrets enable -path="${secret_path}/" kv-v2 || true

_set_source_vault_env_variables
# getting all the keys in the given path from the source vault (sed strips the two header lines of the list output)
keys=$(vault kv list "${secret_path}/" | sed '1,2d')

# iterating through each key in the source vault (in the given path) and inserting the same into the destination vault
printf "Exporting keys from source vault %s at path %s/ ...\n" "${source_vault_url}" "${secret_path}"
for key in ${keys}
do
    _set_source_vault_env_variables
    key_data_json=$(vault kv get -format=json -field=data "${secret_path}/${key}")
    printf "%s %s\n" "${key}" "${key_data_json}"
    _set_destination_vault_env_variables
    echo "${key_data_json}" | vault kv put "${secret_path}/${key}" -
done
printf "Export complete!\n"

# listing all the keys (in the given path) in the destination vault
printf "Keys in the destination vault %s at path %s/ -\n" "${destination_vault_url}" "${secret_path}"
vault kv list "${secret_path}"
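To run it, save the script (for example as migrate-keys.sh, a name chosen here for illustration), fill in the placeholders at the top, and execute it:
chmod +x migrate-keys.sh
./migrate-keys.sh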

Related

How to read multiline Azure Key Vault secret in Azure Devops Release pipeline

I'm having trouble using a multiline Azure Key Vault value inside an Azure Release Pipeline...
I put a multiline value (RSA private key) into Azure Key Vault using the CLI:
az keyvault secret set --vault-name "vault" --name "secret" --file "pk.pem"
This works and I can see the multiline secret in the portal.
Locally using CLI I can also do:
pk=$(az keyvault secret show \
--name "ssh-private-key" \
--vault-name $vault \
--query "value")
This returns a somewhat crappy value (yes including the double quotes):
"-----BEGIN RSA PRIVATE KEY-----\nMIIG4wIBAA .... JtpyW\n-----END RSA PRIVATE KEY-----\n"
I can manage to work with this and send the value to a file like so:
pk="${pk%\"}" #remove first quote
pk="${pk#\"}" #remove last quote
echo $pk | sed 's|\\n|\n|g' | # replace with actual newlines
while IFS= read -r line; do # loop through lines
echo "$line" >> pk.pem # write to file per line
done
This works and I can login to my server using ssh -i pk.pem user#server
But when running the same script in the Azure DevOps release pipeline (also using Bash on a Linux agent), the exact same script fails... I'm also having trouble inspecting the actual value, since the log masks everything related to the secret...
Any guidance on how to debug, or on how to actually read multiline values rather than just storing them, would be hugely appreciated!
Here is some troubleshooting advice:
The error "Host key verification failed." doesn't occur only when the key is incorrect; most of the time it doesn't refer to your key at all.
So I recommend you first try the connection with a simple value to see if it works on Azure DevOps.
What's more, an SSH service connection may help with what you're doing. Go to Project Settings -> Service connections -> Create service connection -> SSH to create one.
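As for writing the multiline value to a file, a shorter variant that sidesteps the quote stripping entirely is to let the CLI emit the raw value; a sketch, assuming the az CLI is available on the agent and reusing the secret and vault names from the question:
# -o tsv prints the raw value without the surrounding JSON double quotes
pk=$(az keyvault secret show --name "ssh-private-key" --vault-name "$vault" --query "value" -o tsv)
# %b expands any remaining \n escape sequences; real newlines pass through unchanged
printf '%b' "$pk" > pk.pem
chmod 600 pk.pem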

Store data in a file and get reference path to it

I wish to create an environment file, not a "variable", and get a path to it in the TravisCI pipeline.
Attached is an image of how we do the same in GitLab:
gitlab environment file image
I need to store secrets in a file and refer to it via a path in the TravisCI pipeline.
Ex: this is how we can do the same in Jenkins:
"KUBECONFIG=/var/lib/jenkins/.kube/filename"
I am not willing to upload my secrets file to a GitHub private repo.
The encrypt-file command encrypts an entire file using symmetric (AES-256) encryption and stores the result in a file. Let us create a file called secret.txt and add the following entries into it:
SECRET_VALUE=ABCDE12345
CLIENT_ID=rocky123
CLIENT_SECRET=abc222222!
Run travis encrypt-file secret.txt after creating the secret.txt file. It stores the result as secret.txt.enc and also prints "add the following to your build script (before_install stage in your .travis.yml, for instance):"
openssl aes-256-cbc -K $encrypted_74945c17fbe2_key -iv $encrypted_74945c17fbe2_iv -in secret.txt.enc -out secret.txt -d
Now add that entry into our .travis.yml script (in the before_install stage); it can then decrypt the values in the secret text file for us.
So the flow is: create the file, run travis encrypt-file secret.txt, copy the entry it produces, and add it to our .travis.yml file in the before_install stage.
Make sure to add secret.txt.enc to the git repository, and make sure NOT to add secret.txt to the git repository.
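Putting it together, the relevant piece of the .travis.yml would look something like this; the $encrypted_74945c17fbe2_* names are generated per repository by the travis CLI, and sourcing the decrypted file is just one hypothetical way to load its entries into the environment:
before_install:
  - openssl aes-256-cbc -K $encrypted_74945c17fbe2_key -iv $encrypted_74945c17fbe2_iv -in secret.txt.enc -out secret.txt -d
  - source secret.txt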
Generally, we cannot keep both the encryption key and the encrypted file in the same place (i.e. the repo). So we store the file somewhere else. Where are you storing it? How will you fetch it?

store and retrieve files from hashicorp vault

I can't figure out how to store files in HashiCorp Vault. Our use case for a PoC is to store an SSL cert at a certain path and then download it via the HTTP API.
I tried using the kv secrets engine which seems the most appropriate.
It seems that you can specify a file whose contents will be stored as the value for a key in HashiCorp Vault.
You can use
vault write <path> value=@file
to write the contents of file to the key specified in path.
So if you want to store the contents of a crt you can do:
vault write secret/ssl-certs/prod-1 value=@ssl-cert.crt
One thing to keep in mind is that you're not saving the file but the contents of the file.
So Vault's default offering doesn't have this baked in, but there's a desktop GUI program that adds this functionality in a user-friendly way.
https://github.com/adobe/cryptr
I did run into a bit of confusion when using it: with a KV v2 secrets engine, the HC web UI and the Cryptr desktop GUI use different path conventions.
When writing Vault policies you'd use /KVv2/data/path/
When using Cryptr you'd use /KVv2/data/path/
When using HC WebUI you'd use /kvv2/path/
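For example, a read policy for a secret under that mount would reference the data/ segment explicitly; a minimal sketch, with kvv2 standing in for your actual mount name:
path "kvv2/data/path/*" {
  capabilities = ["read"]
}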
Fact: you can use base64 encoding to store raw binary files in any KV store, so you can use the same technique to store them in HashiCorp Vault as well.
base64 is a reversible encoding: it takes any binary file and converts it into a one-line string, and it takes that generated string and converts it back into the original binary file. And since you can store a one-line string in any KV store, you can store arbitrary binary files in any KV store! :) (*)
Here's some code to do what you're asking:
CMD:\> vault server -dev
WindowsSubsystemForLinuxBash:/mnt/c# curl -L https://releases.hashicorp.com/vault/1.0.2/vault_1.0.2_linux_amd64.zip > vault.zip
Bash# apt-get update
Bash# apt-get install unzip
Bash# unzip vault.zip -d /bin
Bash# chmod +x /bin/vault
Bash# export VAULT_ADDR=http://127.0.0.1:8200
Bash# vault login s.aO8ustaAV4Ot1OxzBe94vi3J
Bash# cat excelfile.xlsx | md5sum
fb6b4eaa2be1c8c410645a5f0819539e -
Bash# cat excelfile.xlsx | base64 | base64 --decode > x.xlsx
Bash# cat x.xlsx | md5sum
fb6b4eaa2be1c8c410645a5f0819539e -
Bash:/mnt/c# cat excelfile.xlsx | base64 | vault kv put secret/excelfile.xlsx base64dfile=-
(base64dfile=- means: assign the value from standard input, which in this case is the piped output of the cat command)
Chrome: localhost:8200
(login with dev root token, and you'll see the value is characters in a 1 line string)
Bash# rm excelfile.xlsx
Bash# vault kv get -field=base64dfile secret/excelfile.xlsx | tr -d '\n' | base64 --decode > excelfile.xlsx
(or)
Bash# vault kv get -field=base64dfile secret/excelfile.xlsx | sed 's/\r//' | base64 --decode > excelfile.xlsx
Bash# cat excelfile.xlsx | md5sum
fb6b4eaa2be1c8c410645a5f0819539e -
(*Note: Vault and other KV stores often have value size limits. Vault with a Consul backend has a secret size limit of roughly ~375 KB, since base64 encoding bloats the file by 4/3rds, bringing 375 KB up to 500 KB, and Consul has a key-value pair limit of about 0.5 MB.) (For perspective, that's plenty of space: cert files are around ~8 KB, and if it's larger than 375 KB it's probably not a secret.)
Let's say down the road you need to store bigger secrets (such as a Kubernetes etcd snapshot):
Since Vault went 1.0, there's built-in functionality to migrate your storage backend, so you could switch from the Consul storage backend to a hybrid backend of AWS S3 plus Consul (Consul is still needed for HA consistency locking in multi-server setups) to get a bigger limit. Picking a different storage backend gives you a bigger KV size limit, though Vault probably imposes a sensible cap like 10 MB regardless. Even if you had a backend that supported 1 TB values, you would want to think twice about storing large files in Vault: the base64 step adds compute overhead and bloats files by 4/3rds, so a 300 MB file takes up 400 MB once base64'd. (That said, it could make sense for the sake of consistency; consistency is good for automation, maintainability, and compute/storage planning.)
Here's how I'd use Vault if I needed to support large secrets:
I'd write a wrapper python script to get and fetch secrets from Vault, with 3 scenarios, 2 reserved keywords, and the following naming convention/logic:
For secrets > 375 KB: secret/filename with the key bigfile, whose JSON value holds a symmetric encryption key and the location of an encrypted file kept in a store designed for large files. The wrapper script would recognize bigfile as a reserved keyword and execute logic to parse the JSON, download the encrypted file from the file store (torrent/FTP server/CephFS path/Azure Blob/AWS S3/GCP Cloud Storage), and decrypt the file into my current context.
For secret binary files < 375 KB: secret/filename with the key base64dfile, whose value is the one-line string representing the base64-encoded binary file. The wrapper script would recognize base64dfile as a reserved keyword and execute logic to decode it back into a file upon fetching.
For text files (.json with secrets, .yaml with secrets, .pem certs, etc.) < 375 KB: secret/filename with the key filename and the file contents as the value, since multiline strings are allowed.
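A minimal shell sketch of that dispatch logic, assuming a KV v2 mount at secret/ and jq installed (the bigfile branch is left as a stub):
#!/usr/bin/env bash
name="$1"  # e.g. excelfile.xlsx
# the first key of the secret decides how to materialize it (KV v2 JSON layout)
key=$(vault kv get -format=json "secret/${name}" | jq -r '.data.data | keys[0]')
case "${key}" in
  bigfile)     echo "TODO: parse the JSON, fetch the encrypted blob from the file store, decrypt it" ;;
  base64dfile) vault kv get -field=base64dfile "secret/${name}" | tr -d '\n' | base64 --decode > "${name}" ;;
  *)           vault kv get -field="${key}" "secret/${name}" > "${name}" ;;
esac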
While loading KV pairs into Vault, you can also load them from a JSON file, with one of the keys holding the cert.
Below is a sample cert generated at https://www.digicert.com/order/sample-csr.php
-----BEGIN CERTIFICATE REQUEST-----
MIICvDCCAaQCAQAwdzELMAkGA1UEBhMCVVMxDTALBgNVBAgMBFV0YWgxDzANBgNV
BAcMBkxpbmRvbjEWMBQGA1UECgwNRGlnaUNlcnQgSW5jLjERMA8GA1UECwwIRGln
aUNlcnQxHTAbBgNVBAMMFGV4YW1wbGUuZGlnaWNlcnQuY29tMIIBIjANBgkqhkiG
9w0BAQEFAAOCAQ8AMIIBCgKCAQEA8+To7d+2kPWeBv/orU3LVbJwDrSQbeKamCmo
wp5bqDxIwV20zqRb7APUOKYoVEFFOEQs6T6gImnIolhbiH6m4zgZ/CPvWBOkZc+c
1Po2EmvBz+AD5sBdT5kzGQA6NbWyZGldxRthNLOs1efOhdnWFuhI162qmcflgpiI
WDuwq4C9f+YkeJhNn9dF5+owm8cOQmDrV8NNdiTqin8q3qYAHHJRW28glJUCZkTZ
wIaSR6crBQ8TbYNE0dc+Caa3DOIkz1EOsHWzTx+n0zKfqcbgXi4DJx+C1bjptYPR
BPZL8DAeWuA8ebudVT44yEp82G96/Ggcf7F33xMxe0yc+Xa6owIDAQABoAAwDQYJ
KoZIhvcNAQEFBQADggEBAB0kcrFccSmFDmxox0Ne01UIqSsDqHgL+XmHTXJwre6D
hJSZwbvEtOK0G3+dr4Fs11WuUNt5qcLsx5a8uk4G6AKHMzuhLsJ7XZjgmQXGECpY
Q4mC3yT3ZoCGpIXbw+iP3lmEEXgaQL0Tx5LFl/okKbKYwIqNiyKWOMj7ZR/wxWg/
ZDGRs55xuoeLDJ/ZRFf9bI+IaCUd1YrfYcHIl3G87Av+r49YVwqRDT0VDV7uLgqn
29XI1PpVUNCPQGn9p/eX6Qo7vpDaPybRtA2R7XLKjQaF9oXWeCUqy1hvJac9QFO2
97Ob1alpHPoZ7mWiEuJwjBPii6a9M9G30nUo39lBi1w=
-----END CERTIFICATE REQUEST-----
In order to store the above cert as a key-value pair in a JSON file, the newlines have to be replaced with \n so it can be saved as a single continuous string.
Below is the content of the JSON file (with the same cert saved as the value):
vault_certfile_kv_stackoverflow.json
{
"sample.ssl.public.cert":"-----BEGIN CERTIFICATE REQUEST-----\nMIICvDCCAaQCAQAwdzELMAkGA1UEBhMCVVMxDTALBgNVBAgMBFV0YWgxDzANBgNV\nBAcMBkxpbmRvbjEWMBQGA1UECgwNRGlnaUNlcnQgSW5jLjERMA8GA1UECwwIRGln\naUNlcnQxHTAbBgNVBAMMFGV4YW1wbGUuZGlnaWNlcnQuY29tMIIBIjANBgkqhkiG\n9w0BAQEFAAOCAQ8AMIIBCgKCAQEA8+To7d+2kPWeBv/orU3LVbJwDrSQbeKamCmo\nwp5bqDxIwV20zqRb7APUOKYoVEFFOEQs6T6gImnIolhbiH6m4zgZ/CPvWBOkZc+c\n1Po2EmvBz+AD5sBdT5kzGQA6NbWyZGldxRthNLOs1efOhdnWFuhI162qmcflgpiI\nWDuwq4C9f+YkeJhNn9dF5+owm8cOQmDrV8NNdiTqin8q3qYAHHJRW28glJUCZkTZ\nwIaSR6crBQ8TbYNE0dc+Caa3DOIkz1EOsHWzTx+n0zKfqcbgXi4DJx+C1bjptYPR\nBPZL8DAeWuA8ebudVT44yEp82G96/Ggcf7F33xMxe0yc+Xa6owIDAQABoAAwDQYJ\nKoZIhvcNAQEFBQADggEBAB0kcrFccSmFDmxox0Ne01UIqSsDqHgL+XmHTXJwre6D\nhJSZwbvEtOK0G3+dr4Fs11WuUNt5qcLsx5a8uk4G6AKHMzuhLsJ7XZjgmQXGECpY\nQ4mC3yT3ZoCGpIXbw+iP3lmEEXgaQL0Tx5LFl/okKbKYwIqNiyKWOMj7ZR/wxWg/\nZDGRs55xuoeLDJ/ZRFf9bI+IaCUd1YrfYcHIl3G87Av+r49YVwqRDT0VDV7uLgqn\n29XI1PpVUNCPQGn9p/eX6Qo7vpDaPybRtA2R7XLKjQaF9oXWeCUqy1hvJac9QFO2\n97Ob1alpHPoZ7mWiEuJwjBPii6a9M9G30nUo39lBi1w=\n-----END CERTIFICATE REQUEST-----"
}
Finally, here is how to upload this JSON file:
vault write --address=https://<vaultdomain> secret/<path> @vault_certfile_kv_stackoverflow.json
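To verify the upload and get the cert back with its newlines restored, you can read the single field straight to a file; a sketch, assuming a KV v1 mount at secret/ as the write command above implies:
vault read --address=https://<vaultdomain> -field=sample.ssl.public.cert secret/<path> > sample.csr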
If someone is looking for a way to push the file using an API call with a curl command:
I used the following approach to push a YAML file to HashiCorp Vault, so it should be similar for a crt file or any other non-JSON file.
For pushing the YAML file directly:
curl -k -H 'X-Vault-Token: <vault_token>' -X POST --data @test.yaml https://<vault_host>/v1/secret/foo/bar
If you would like to encode the file before pushing to Vault:
base64 test.yaml | curl -k -H "X-Vault-Token: <vault_token>" -X POST --data @- https://<vault_host>/v1/secret/foo/bar
In the above command, the output of base64 is passed to '@-'. The '-' stands for "take the value from stdin", which in this case is the piped bash output.
For testing if the secret got pushed:
curl -s -k -H 'X-Vault-Token: <vault_token>' https://<vault_host>/v1/secret/foo/bar
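To pull the base64 variant back out and decode it, something along these lines should work; a sketch that assumes the payload was stored under a JSON key (hypothetically named value here) and that jq is available:
curl -s -k -H 'X-Vault-Token: <vault_token>' https://<vault_host>/v1/secret/foo/bar | jq -r '.data.value' | base64 --decode > test.yaml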

adding SSL automation task to pipeline

I've created a PowerShell script that sets up SSL based on a provided PFX file.
Using the VSTS pipeline, what is the recommended way of passing PFX file to the script?
1. Including the PFX file in the solution
2. Getting the PFX file path on the target environment (contains a dependency: it assumes the PFX file is already placed on the target environment)
3. Any other solution...?
The common way to pass the certificate to the script is option 1 (including the PFX file in the solution), as you listed.
After adding the .pfx file into your solution, you can import certificates and private keys with Import-PfxCertificate.
For detailed usage and examples of Import-PfxCertificate, you can refer to this document.

GCS - multiple credentials in a single boto file

New to GCS (just got started with it today). Looks very promising.
Is there any way to use multiple S3 (or GCS) accounts in a single boto file? I only see the option to assign keys for one S3 and one GCS account per file. I'd like to use multiple credentials.
We'd like to copy from S3 to S3, or GCS to GCS, with each of those buckets using different keys.
You should be able to setup multiple profiles within your .boto file.
You could add something like:
[profile prod]
gs_access_key_id=....
gs_secret_access_key=....
[profile dev]
gs_access_key_id=....
gs_secret_access_key=....
And then from your code you can add a profile_name= parameter to the connection call:
con = boto.connect_gs(profile_name="dev")
You can definitely use multiple boto files, just make sure that the credentials in each of them are valid. Every time you need to switch between them, run the following command with the right path.
$ BOTO_CONFIG=/path/to_boto gsutil cp SOME_FILE gs://bucket
Example :
BOTO_CONFIG=/etc/boto.cfg gsutil -m cp text.txt gs://bucket
Additionally, you can have aliases for your different profiles. Just create an alias for each command and you are set!
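For instance (the config paths and alias names are illustrative):
alias gsutil-prod='BOTO_CONFIG=/path/to/boto_prod gsutil'
alias gsutil-dev='BOTO_CONFIG=/path/to/boto_dev gsutil'
gsutil-dev cp text.txt gs://bucket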