Cannot store file content in HashiCorp Vault using vault kv put

I am trying to add the contents of a file to Vault using vault kv put, but I am unable to add the data:
vault kv put -format=json -address ${VAULT_ADDR} key=@abc.json
Here the error says "Must supply data"
I also tried various other options, like:
vault kv put -format=json -address ${VAULT_ADDR} key @abc.json
Here key is being added to the Vault address URL as the path, e.g. vault-address/key
and
vault kv put -format=json -address ${VAULT_ADDR} @abc.json
Here error says "Must supply data"
My JSON file is a sample test file and has the following content in it:
{
  "key": "value",
  "foo": "bar",
  "bar": "baz"
}
Can someone please help me solve this issue?

You can create the secret directly without -format=json. The command below worked for me:
vault kv put app/dev/test @test.json
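For reference, a minimal sketch of the same approach, assuming the app mount from the command above and the sample test.json from the question (the @ prefix tells the CLI to read the key/value pairs from the file):
# test.json contains {"key": "value", "foo": "bar", "bar": "baz"}
vault kv put app/dev/test @test.json
# Read the secret back to check that each JSON field was stored as its own key:
vault kv get app/dev/test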

Related

Is it possible to create a tls kubernetes secret using Azure Key Vault data resources in Terraform?

I have a certificate file and a private key file that I am using to implement tls encrypted traffic for several different k8s pods running under an NGINX ingress load balancer. This works fine (i.e. the web apps are visible and show as secure in a browser) if I create the kubernetes.io/tls secret in either of these ways:
Use kubectl: kubectl create secret tls my-tls-secret --key <path to key file> --cert <path to cert file>.
Reference those files locally in terraform:
resource "kubernetes_secret" "my_tls_secret" {
metadata {
name = "my-tls-secret"
}
type = "kubernetes.io/tls"
data = {
"tls.crt" = file("${path.module}/certfile.cer"),
"tls.key" = file("${path.module}/keyfile.key")
}
}
However, neither of these methods is ideal because, for #1, it turns my Terraform plan/apply steps into a two-step process, and for #2, I don't want to commit the key file to source control for security reasons.
So, my question is: is there a way to do this by using some combination of Azure Key Vault data resources (i.e. keys, secrets or certificates)?
I have tried the following:
Copy/pasting the cert and key into Key Vault secrets (I have also tried this with base64 encoding the values before pasting them into the Key Vault and using base64decode() around the tls.crt and tls.key values in the Terraform):
data "azurerm_key_vault_secret" "my_private_key" {
name = "my-private-key"
key_vault_id = data.azurerm_key_vault.mykv.id
}
data "azurerm_key_vault_secret" "my_certificate" {
name = "my-certificate"
key_vault_id = data.azurerm_key_vault.mykv.id
}
resource "kubernetes_secret" "my_tls_secret" {
metadata {
name = "my-tls-secret"
}
type = "kubernetes.io/tls"
data = {
"tls.crt" = data.azurerm_key_vault_secret.my_certificate.value,
"tls.key" = data.azurerm_key_vault_secret.my_private_key.value
}
}
Tried importing the cert as an Azure key vault certificate and accessing its attributes like so:
data "azurerm_key_vault_certificate_data" "my_certificate_data" {
name = "my-certificate"
key_vault_id = data.azurerm_key_vault.mykv.id
}
resource "kubernetes_secret" "my_tls_secret" {
metadata {
name = "my-tls-secret"
}
type = "kubernetes.io/tls"
data = {
"tls.crt" = data.azurerm_key_vault_certificate_data.my_certificate_data.pem,
"tls.key" = data.azurerm_key_vault_certificate_data.my_certificate_data.key
}
}
which results in an error in the NGINX ingress log of:
[lua] certificate.lua:253: call(): failed to convert private key from PEM to DER: PEM_read_bio_PrivateKey() failed, context: ssl_certificate_by_lua*, client: xx.xx.xx.xx, server: 0.0.0.0:443
Both of these attempts resulted in failure and the sites ended up using the default/fake/acme kubernetes certificate and so are shown as insecure in a browser.
I could potentially store the files in a storage container and wrap my terraform commands in a script that pulls the cert/key from the storage container first and then use working method #2 from above, but I'm hoping there's a way I can avoid that that I am just missing. Any help would be greatly appreciated!
Method #1 from the original post works - the key point I was missing was how I was getting the cert/key into Azure KeyVault. As mentioned in the post, I was copy/pasting the text from the files into the web portal secret creation UI. Something was getting lost in translation doing it this way. The right way to do it is to use the Azure CLI, like so:
az keyvault secret set --vault-name <vault name> --name my-private-key --file <path to key file>
az keyvault secret set --vault-name <vault name> --name my-certificate --file <path to cert file>
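As a quick sanity check (the vault and secret names here are the placeholders from above), you can confirm that the stored value round-trips as the exact PEM text rather than a mangled copy:
# Print the first line of the stored secret; it should be the PEM header,
# e.g. -----BEGIN PRIVATE KEY----- (or -----BEGIN RSA PRIVATE KEY-----)
az keyvault secret show --vault-name <vault name> --name my-private-key --query value -o tsv | head -n 1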

HashiCorp Vault permission denied 403 for AppRole with assigned policy kv v2

I'm having trouble with Vault: it returns a permission denied 403 error when I try to get secrets with my k8s AppRole.
I set up Vault with the KV version 2 engine.
Added a policy for my AppRole:
Created a secret under "dev/fra1/statement":
When I log in with the AppRole creds, I get a response with the required policies:
When I try to execute a get request with the AppRole client_token, I get this error:
I tried different prefixes and so on (since people on the internet had problems with them).
But I was then able to localize the problem by performing that request with the root token, and it went OK:
Now I'm out of ideas. I believe the only place the problem can be is the policy. What am I doing wrong?
OK, so I finally figured the right prefix out; it should be:
path "kv/data/dev/*" {
capabilities = ["read"]
}
Really, there is some hell with these prefixes in Vault; they should describe them better in the docs.
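A quick way to double-check what an AppRole token is allowed to do on the exact API path (the path below matches the secret from the question; KV v2 reads go through <mount>/data/...):
# Should report "read" once the policy above is attached to the token:
vault token capabilities <approle-client-token> kv/data/dev/fra1/statement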
The "secret" prefix is used in v1 of Vault's KV API. v2 uses the mount name, which by default is "kv", but can be anything when you first create the mount for your KV secrets engine.
It is important to note that some tools which use Vault's API still use v1 of the KV API to access secrets, even though your KV secrets engine may be v2. So you may need two different permissions in your policy.
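For example, a policy along these lines should cover both styles of path (a sketch: the dev-read policy name and the extra v1-style path are assumptions, and kv should be whatever your mount is called):
vault policy write dev-read - <<EOF
# KV v2 API path (reads go through <mount>/data/...):
path "kv/data/dev/*" {
  capabilities = ["read"]
}
# v1-style path still used by some tools:
path "kv/dev/*" {
  capabilities = ["read"]
}
EOF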
I'm facing the same issue. I have a secrets engine called TestSecretsEngine and a single secret env. In my policy I add read to the path TestSecretsEngine/data/env to no avail. I'm using the node-vault npm module and it's failing at vault.approleLogin with a 403. It's got to be something with the policy because when I add a nonexistent path, I get a 404 instead.

Retrieval of secrets in Azure App Service from Hashicorp Vault using Managed Identity | Missing Role - Error

HashiCorp Vault is the native product of our organization and is a widely used and recommended approach for storing all key-value pairs or any secrets. Any applications that are deployed on Azure must also store/retrieve tokens from HashiCorp Vault and not from Azure Key Vault. I provided this information just to add a bit of background to the requirement.
Now coming to the actual problem: I deployed the dotnet application on Azure App Service, enabled the system-managed identity, and was able to successfully retrieve the JWT token.
As per the flow I understood from reading the documentation: first, retrieve the token of the application deployed on Azure with system-managed identity enabled. Once this is done, pass this token to Vault for validation, which it validates via OIDC against AAD. On successful validation, I am given back a Vault token which can be used to fetch secrets from Vault.
To perform these steps, configuration is required on the Vault side, for which I performed all the steps below on the Vault server installed on my local Windows machine:
Command line operations:
Start the Vault server.
Open another command prompt and set the environment variables:
set VAULT_ADDR=http://127.0.0.1:8200
set VAULT_TOKEN=s.iDdVbLKPCzmqF2z0RiXPMxLk
vault auth enable jwt
vault write auth/jwt/config oidc_discovery_url=https://sts.windows.net/4a95f16f-35ba-4a52-9cb3-7f300cdc0c60/ bound_issuer=https://sts.windows.net/4a95f16f-35ba-4a52-9cb3-7f300cdc0c60/
vault read auth/jwt/config
Policy associated with sqlconnection:
Create a role (webapp-role) by using the command:
curl --header "X-Vault-Token: %VAULT_TOKEN%" --insecure --request POST --data @C:\Users\48013\source\repos\HashVaultAzure\Vault-files\payload.json %VAULT_ADDR%/v1/auth/jwt/role/webapp-role
payload.json:
{
  "bound_audiences": "https://management.azure.com/",
  "bound_claims": {
    "idp": "https://sts.windows.net/4a95f16f-35ba-4a52-9cb3-7f300cdc0c60/",
    "oid": "8d2b99fb-f4f4-4afb-9ee3-276891f40a65",
    "tid": "4a95f16f-35ba-4a52-9cb3-7f300cdc0c60/"
  },
  "bound_subject": "8d2b99fb-f4f4-4afb-9ee3-276891f40a65",
  "claim_mappings": {
    "appid": "application_id",
    "xms_mirid": "resource_id"
  },
  "policies": ["sqlconnection"],
  "role_type": "jwt",
  "token_bound_cidrs": ["10.0.0.0/16"],
  "token_max_ttl": "24h",
  "user_claim": "sub"
}
vault read auth/jwt/role/webapp-role
Run the command below with the JWT token retrieved from the application (having the managed identity enabled) deployed on Azure AAD, and pass it as "your_jwt". This command should return the Vault token as shown in the link https://www.vaultproject.io/docs/auth/jwt
curl --request POST --data '{"jwt": "your_jwt", "role": "webapp-role"}' http://127.0.0.1:8200/v1/auth/jwt/login
At this point I receive an error, "Missing Role", and I am stuck here, not able to find any solution.
The expected response should be a Vault token/client_token, as shown:
JWT token decoded information:
{
  "aud": "https://management.azure.com",
  "iss": "https://sts.windows.net/4a95f16f-35ba-4a52-9cb3-7f300cdc0c60/",
  "iat": 1631172032,
  "nbf": 1631172032,
  "exp": 1631258732,
  "aio": "E2ZgYNBN4JVfle92Tsl1b8m8pc9jAA==",
  "appid": "cf5c734c-a4fd-4d85-8049-53de46db4ec0",
  "appidacr": "2",
  "idp": "https://sts.windows.net/4a95f16f-35ba-4a52-9cb3-7f300cdc0c60/",
  "oid": "8d2b99fb-f4f4-4afb-9ee3-276891f40a65",
  "rh": "0.AVMAb_GVSro1Ukqcs38wDNwMYExzXM_9pIVNgElT3kbbTsBTAAA.",
  "sub": "8d2b99fb-f4f4-4afb-9ee3-276891f40a65",
  "tid": "4a95f16f-35ba-4a52-9cb3-7f300cdc0c60",
  "uti": "LDjkUZdlKUS4paEleUUFAA",
  "ver": "1.0",
  "xms_mirid": "/subscriptions/0edeaa4a-d371-4fa8-acbd-3675861b0ac8/resourcegroups/AzureAADResource/providers/Microsoft.Web/sites/hashvault-test",
  "xms_tcdt": "1600006540"
}
The issue was with missing configuration on both the Azure and Vault sides.
These were the additional steps done to make it work.
Create an Azure SPN (which is equivalent to creating an app registration with a client secret) and assign it as Reader on the subscription:
az ad sp create-for-rbac --name "Hashicorp Vault Prod AzureSPN" --skip-assignment
Create the Vault config:
vault auth enable azure
vault write auth/azure/config tenant_id=lg240e12-76g1-748b-cd9c-je6f29562476 resource=https://management.azure.com/ client_id=34906a49-9a8f-462b-9d68-33ae40hgf8ug client_secret=123456ABCDEF
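If the login still fails after this, a couple of hedged sanity checks (assuming the default mount paths used above) are to read the configuration back and compare it against the claims in the decoded JWT:
# Confirm the tenant_id, resource, and client_id the auth method was configured with:
vault read auth/azure/config
# Confirm the role's bound_claims/bound_subject match the token's oid, tid, and sub:
vault read auth/jwt/role/webapp-role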

Hashicorp Vault cli return 403 when trying to use kv

I set up vault backed by a consul cluster. I secured it with https and am trying to use the cli on a separate machine to get and set secrets in the kv engine. I am using version 1.0.2 of both the CLI and Vault server.
I have logged in with the root token so I should have access to everything. I have also set my VAULT_ADDR appropriately.
Here is my request:
vault kv put secret/my-secret my-value=yea
Here is the response:
Error making API request.
URL: GET https://{my-vault-address}/v1/sys/internal/ui/mounts/secret/my-secret
Code: 403. Errors:
* preflight capability check returned 403, please ensure client's policies grant access to path "secret/my-secret/"
I don't understand what is happening here. I am able to set and read secrets in the kv engine no problem from the vault ui. What am I missing?
This was a result of me not reading the documentation.
The request was failing because there was no secrets engine mounted at that path.
You can check your secrets engine paths by running vault secrets list -detailed
This showed that my KV secrets engine was mapped to the path kv, not secret as I was trying.
Therefore running vault kv put kv/my-secret my-value=yea worked as expected.
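Putting that together as a short sketch (the kv mount name is whatever your own listing reports):
vault secrets list -detailed     # find the path the KV engine is actually mounted at
vault kv put kv/my-secret my-value=yea
vault kv get kv/my-secret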
You can enable a secrets engine at a specific path:
vault secrets enable -path=kv kv
https://www.vaultproject.io/intro/getting-started/secrets-engines
You need to update secret/my-secret to whichever path you mounted when you enabled the KV secrets engine.
For example, if you enable the secret engine like this:
vault secrets enable -version=2 kv-v2
You should write to kv-v2 instead of secret:
vault kv put kv-v2/my-secret my-value=yea

Vault: Get key value secrets

I've created this secret backend:
$ vault secrets enable -path=openshift kv
$ vault write openshift/postgresql username=tdevhub
$ vault write openshift/postgresql password=password
I can't quite figure out how to read the username and password values.
I've tried with:
$ vault read openshift/postgresql/password
or
$ vault kv get openshift/post...
On the other hand, when I perform this command:
$ vault kv get openshift/postgresql
====== Data ======
Key         Value
---         -----
username    tdevhub
I'd like to store both the username and password in a secrets backend. I've realized that a kv secrets backend is only able to store one key... is that right?
How could I achieve my goal?
You can read secrets using:
vault kv get -field=password openshift/postgresql
or
vault kv get -field=username openshift/postgresql
You can store multiple values with a single command like this: vault write openshift/postgresql username=tdevhub password=password. When you read that location, both the username and password values will be returned.
Unfortunately, you can't append data to the same location, so when you execute the write again on that path the previous values will be overwritten. If you want to append data later, you have two choices:
Read your data each time you need to add a value, and append it manually
Use KV version 2 of Vault's key/value secrets engine (see the sketch below)
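Here is a small sketch of the KV v2 route; the openshift-v2 mount name is just an example, and vault kv patch adds or updates keys without clobbering the ones already stored:
vault secrets enable -path=openshift-v2 -version=2 kv
vault kv put openshift-v2/postgresql username=tdevhub
vault kv patch openshift-v2/postgresql password=password
vault kv get openshift-v2/postgresql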
You can use the flags -format json and -field=data to get all the data under a given key properly formatted in JSON. This also filters out the extra details.
vault kv get -format json -field=data openshift/postgresql
The output you would get is -
{
  "username": "tdevhub",
  "password": "password"
}