Hashicorp Vault - 403 when making api call - hashicorp-vault

I've run into some (possibly) strange behavior when trying to get secrets from Vault.
Setup:
Vault 1.2.2
Very basic KV secret
A token with an associated policy that allows reading this secret.
I can successfully read that secret using the Vault CLI:
root@us-border-proxy# env | grep VAULT
VAULT_TOKEN=BLABLA
VAULT_CACERT=./vault-ca.crt
VAULT_ADDR=https://1.1.1.1:8200
root@us-border-proxy# vault kv get secret/example
=== Data ===
Key    Value
---    -----
key    SECRETPASSWORD
But the problem starts when I try to do the same using the Vault API; I just get a 403:
root@us-border-proxy# curl -k -H "X-Vault-Token: BLABLA" -X GET https://1.1.1.1:8200/v1/secret/data/example
{"errors":["1 error occurred:\n\t* permission denied\n\n"]}
What am I missing?

Found your error.
When you read from the CLI, the path you use is secret/example:
root@us-border-proxy# vault kv get secret/example
=== Data ===
Key    Value
---    -----
key    SECRETPASSWORD
But the path in the curl command is secret/data/example:
curl -k -H "X-Vault-Token: BLABLA" -X GET https://1.1.1.1:8200/v1/secret/data/example
If the secret engine is KV version 1, changing the API path to secret/example should work. If it is KV version 2, the data/ segment in the URL is correct (the CLI's kv get inserts it for you), and the 403 instead means the policy must grant read on secret/data/example rather than secret/example.
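For reference, the two KV engine versions expect different policy paths; a minimal policy sketch (assuming the mount is named secret) would look like this:

```hcl
# KV v2: the API path (and therefore the policy path) includes "data/".
path "secret/data/example" {
  capabilities = ["read"]
}

# KV v1 equivalent (no "data/" segment):
# path "secret/example" {
#   capabilities = ["read"]
# }
```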

Related

How to retrieve cluster keys in Vault if they are lost?

I generated the root token during the initialization of the vault using the following command.
$ kubectl exec vault-0 -- vault operator init \
-key-shares=1 \
-key-threshold=1 \
-format=json > cluster-keys.json
However, I have lost the file cluster-keys.json.
Is it possible to get the cluster-keys.json content again without re-initializing?

How to create an Azure Postgres server with an admin password from Key Vault?

To make parameters stored in Key Vault available to my Azure Web App, I've executed the following:
identity=`az webapp identity assign \
--name $(appName) \
--resource-group $(appResourceGroupName) \
--query principalId -o tsv`
az keyvault set-policy \
--name $(keyVaultName) \
--secret-permissions get \
--object-id $identity
Now I want to create an Azure Postgres server, taking the admin password from a key vault:
az postgres server create \
--location $(location) \
--resource-group $(ResourceGroupName) \
--name $(PostgresServerName) \
--admin-user $(AdminUserName) \
--admin-password '$(AdminPassWord)' \
--sku-name $(pgSkuName)
If the value of my AdminPassWord is here something like
#Microsoft.KeyVault(SecretUri=https://<myKv>.vault.azure.net/secrets/AdminPassWord/)
I need the single quotes (as above) for the postgres server to be created at all. But does this mean that the password will be the whole string '#Microsoft.KeyVault(SecretUri=https://<myKv>.vault.azure.net/secrets/AdminPassWord/)' instead of the secret stored in <myKv>?
When running my pipeline without the quotes (i.e. just --admin-password $(AdminPassWord) \), I get the error message syntax error near unexpected token ('. I thought that it could be a consequence of the fact that I haven't set the policy --secret-permissions get for the postgres server resource. But how can I set it before creating the postgres server?
The expression #Microsoft.KeyVault(SecretUri=https://<myKv>.vault.azure.net/secrets/AdminPassWord/) is used to access a Key Vault secret value from an Azure Web App; once you configure it with the first two commands, the managed identity of the web app can access the Key Vault secret.
But if you want to create an Azure Postgres server with that password, you need to obtain the secret value first and use it, rather than using the expression.
For Azure CLI, you could use az keyvault secret show, then pass the secret to the parameter --admin-password in az postgres server create.
az keyvault secret show [--id]
                        [--name]
                        [--query-examples]
                        [--subscription]
                        [--vault-name]
                        [--version]
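Putting the two steps together, something like the following should work (a sketch, not a verified pipeline: the secret name AdminPassWord and the $(…) pipeline variables are taken from the question, and --query value -o tsv prints just the secret's raw value):

```
# Fetch the actual secret value (not the reference expression) from Key Vault.
adminPassword=$(az keyvault secret show \
  --vault-name $(keyVaultName) \
  --name AdminPassWord \
  --query value -o tsv)

# Pass the real value; double quotes keep the shell from mangling special characters.
az postgres server create \
  --location $(location) \
  --resource-group $(ResourceGroupName) \
  --name $(PostgresServerName) \
  --admin-user $(AdminUserName) \
  --admin-password "$adminPassword" \
  --sku-name $(pgSkuName)
```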

Openshift 4.x Session Token Retrieval Using REST API Calls

I have a use case that requires retrieving OpenShift 4.x session tokens. This shell command against the 3.11 endpoint works fine:
export TOKEN=$(curl -u user1:test#123 -kI 'https://myose01:8443/oauth/authorize?client_id=openshift-challenging-client&response_type=token' | grep -oP "access_token=\K[^&]*")
However, OpenShift 4.4 seems to have different endpoints, and I'm having trouble reproducing the same result. Does anyone know what the 4.4 equivalent is?
Using the OpenShift CLI is not an option.
First, get your endpoints with this command:
oc get --raw '/.well-known/oauth-authorization-server'
You are looking for authorization_endpoint.
Then add this header to your request:
-H "X-CSRF-Token: 100"
So if you run:
curl -u user1:test#123 'https://authorization_endpoint_URL/oauth/authorize?client_id=openshift-challenging-client&response_type=token' -kI -H "X-CSRF-Token: 100" | grep -oP "access_token=\K[^&]*"
you will get your token.
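The token comes back in the Location header of the OAuth redirect, and the grep -oP pattern pulls it out of that header. A self-contained sketch of just the extraction step (the header line below is a made-up sample, not real server output):

```shell
# Illustrative redirect header, shaped like what the OAuth server returns.
location='Location: https://oauth.example.com/callback#access_token=sha256~AbCdEf123&expires_in=86400'

# Same extraction as above: \K drops "access_token=", then match up to the next "&".
token=$(printf '%s\n' "$location" | grep -oP "access_token=\K[^&]*")
printf '%s\n' "$token"   # prints sha256~AbCdEf123
```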

Getting a 403 - permission denied when using aws client to set the acl to public-read

I am attempting to change the ACL of a file (100KB.file) I have within the IBM COS bucket devtest1.ctl-internal.nasv.cos, but the call fails with Access Denied.
It seems my AWS credentials (or the call itself) do not have the correct permissions to update the ACL.
Command:
aws --endpoint-url=https://s3.us-south.objectstorage.softlayer.net \
    s3api put-object-acl --bucket devtest1.ctl-internal.nasv.cos \
    --key 100KB.file --acl public-read
Return:
An error occurred (AccessDenied) when calling the PutObjectAcl
operation: Access Denied
You haven't mentioned that you have configured HMAC credentials on your bucket, so I'll assume you haven't. I'm also assuming that operations other than PutObjectAcl do not work for you.
Try adding HMAC credentials:
Then ...
Source: https://console.bluemix.net/docs/services/cloud-object-storage/hmac/credentials.html#using-hmac-credentials
I am having the same issue using the AWS CLI. However, you can do the same operation using cURL, providing your IBM Cloud IAM token:
curl -X "PUT" "https://{endpoint}/{bucket-name}/{object-name}?acl" \
-H "x-amz-acl: public-read" \
-H "Authorization: Bearer {token}" \
-H "Content-Type: text/plain; charset=utf-8"
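The {token} placeholder above is an IBM Cloud IAM bearer token. One way to obtain it (a sketch, assuming you have an IBM Cloud API key) is the IAM token endpoint:

```
# Exchange an IBM Cloud API key for an IAM bearer token;
# the access_token field of the JSON response goes into the Authorization header.
curl -X POST 'https://iam.cloud.ibm.com/identity/token' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'grant_type=urn:ibm:params:oauth:grant-type:apikey' \
  -d 'apikey={your-api-key}'
```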

HashiCorp Vault Mongo error

I'm trying to run the default configuration for HashiCorp Vault and MongoDB, but I can't complete the tutorial from here: https://www.vaultproject.io/docs/secrets/databases/mongodb.html.
It crashes here:
vault write database/config/mongodb \
plugin_name=mongodb-database-plugin \
allowed_roles="readonly" \
connection_url="mongodb://admin:Password!#mongodb.acme.com:27017/admin?ssl=true"
-bash: !mongodb.acme.com: event not found
I have MongoDB installed and have correctly mounted the database secrets engine in Vault.
There are several things to change from that command.
vault write database/config/mongodb \
plugin_name=mongodb-database-plugin \
allowed_roles="readonly" \
connection_url="mongodb://admin:passwd@127.0.0.1:27017/admin"
admin:Password has to be changed to your actual admin credentials (keep in mind that MongoDB doesn't ship with any admin:password on a fresh installation).
mongodb.acme.com has to be changed to the IP of the machine where MongoDB runs.
Finally, disable SSL with ssl=false or remove the parameter entirely.
Note also that the -bash: !mongodb.acme.com: event not found message itself comes from bash history expansion: an unescaped ! inside double quotes triggers it, so single-quote the connection_url or escape the !.
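The history-expansion fix can be shown on its own (a sketch with hypothetical credentials and host, matching the tutorial's example values):

```shell
# Single quotes stop an interactive bash from history-expanding the '!'
# in the password, which is what produced the "event not found" error.
connection_url='mongodb://admin:Password!@mongodb.acme.com:27017/admin'
printf '%s\n' "$connection_url"   # prints the URL with the '!' intact
```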