Vault transit engine auto unseal does not pass VAULT_TOKEN when starting up?

VAULT-1 Unseal provider:
cat /etc/vault.d/vault.json
{
  "listener": [{
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  }],
  "storage": {
    "file": {
      "path": "/opt/vault/data"
    }
  },
  "max_lease_ttl": "1h",
  "default_lease_ttl": "1h"
}
VAULT-2 Unseal client (this is the Vault attempting to auto-unseal itself):
cat /etc/vault.d/vault.hcl
disable_mlock = true
ui = true

storage "file" {
  path = "/vault-2/data"
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = "true"
}

seal "transit" {
  address         = "http://192.168.100.100:8200"
  disable_renewal = "false"
  key_name        = "autounseal"
  mount_path      = "transit/"
  tls_skip_verify = "true"
}
Token seems valid on VAULT-1:
vault token lookup s.XazV
Key Value
--- -----
accessor eCH1R3G
creation_time 1637091280
creation_ttl 10h
display_name token
entity_id n/a
expire_time 2021-11-17T00:34:40.837284665-05:00
explicit_max_ttl 0s
id s.XazV
issue_time 2021-11-16T14:34:40.837289691-05:00
meta <nil>
num_uses 0
On VAULT-2, I have an environment variable set:
export VAULT_TOKEN="s.XazV"
I have the corresponding policy enabled on VAULT-1. However, when starting the service on VAULT-2:
vault2 vault[758]: URL: PUT http://192.168.100.100:8200/v1/transit/encrypt/autounseal
vault2 vault[758]: Code: 400. Errors:
vault2 vault[758]: * missing client token
Thank you.

If you're starting the Vault service with systemctl, the unit does not inherit variables exported in your shell, so you may need to configure the service file to supply the token in an Environment= directive rather than with export.
https://askubuntu.com/questions/940679/pass-environment-variables-to-services-started-with-systemctl#940797
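A minimal sketch of such an override, assuming the unit is named vault.service (the token value is the placeholder from the question):

```ini
# /etc/systemd/system/vault.service.d/override.conf
# Systemd services do not read your login shell's profile, so the
# token has to be supplied in the unit's own environment.
[Service]
Environment="VAULT_TOKEN=s.XazV"
```

After creating the drop-in, run systemctl daemon-reload and restart Vault so the new environment takes effect. Alternatively, the transit seal stanza accepts a token parameter directly, which avoids relying on the process environment entirely.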

Related

No known RoleID Vault Agent

I'm using vault approle auth method to fetch secrets from vault. Below is my vault agent configmap.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  config-init.hcl: |
    "auto_auth" = {
      "method" = {
        "config" = {
          "role" = "test"
          "role_id_file_path" = "roleid"
          "secret_id_file_path" = "secretid"
          "remove_secret_id_file_after_reading" = false
        }
        "type" = "approle"
      }
      "sink" = {
        "config" = {
          "path" = "/home/vault/.token"
        }
        "type" = "file"
        "wrap_ttl" = "30m"
      }
    }
    "vault" = {
      "address" = "https://myvault.com"
    }
    "exit_after_auth" = true
    "pid_file" = "/home/vault/.pid"
Then I'm referencing the above configmap in the deployment file.
annotations:
  vault.hashicorp.com/agent-inject: 'true'
  vault.hashicorp.com/agent-configmap: 'my-configmap'
But I get the below error:
vault-agent-init 2022-07-20T10:43:13.306Z [ERROR] auth.handler: error getting path or data from method: error="no known role ID" backoff=4m51.03s
Create a Kubernetes Secret containing the RoleID and SecretID, and pass it with the below annotation:
vault.hashicorp.com/extra-secret: "secret-file"
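A minimal sketch of such a Secret, assuming the key names roleid and secretid match the role_id_file_path and secret_id_file_path in the agent config (the values are placeholders; the injector mounts extra secrets under /vault/custom/, so the agent's file paths should point there, e.g. /vault/custom/roleid):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-file   # referenced by the vault.hashicorp.com/extra-secret annotation
stringData:
  roleid: <your-approle-role-id>       # placeholder
  secretid: <your-approle-secret-id>   # placeholder
```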

Autounseal Vault with GCP KMS

I would like to use Vault's auto-unseal mechanism with GCP KMS. I have been following this tutorial (section: 'Google KMS Auto Unseal') and applying the official HashiCorp Helm chart with the following values:
global:
  enabled: true
injector:
  logLevel: "debug"
server:
  logLevel: "debug"
  extraEnvironmentVars:
    GOOGLE_REGION: global
    GOOGLE_PROJECT: ESGI-projects
    GOOGLE_APPLICATION_CREDENTIALS: /vault/userconfig/kms-creds/credentials.json
  extraVolumes:
    - type: 'secret'
      name: 'kms-creds'
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      config: |
        ui = true
        listener "tcp" {
          tls_disable = 1
          address = "[::]:8200"
          cluster_address = "[::]:8201"
        }
        seal "gcpckms" {
          project = "ESGI-projects"
          region = "global"
          key_ring = "gitter"
          crypto_key = "vault-helm-unseal-key"
        }
        storage "raft" {
          path = "/vault/data"
        }
I have created a kms-creds secret with the JSON credentials for a service account (I have tried both the Cloud KMS Service Agent role and the Owner role, but neither works).
Here are the keys in my key ring (screenshot omitted).
My cluster is just a local cluster created with kind.
On the first replica of the Vault server all seems OK (though the pod is not running), and the two others log the normal message saying that the Vault is sealed (screenshots omitted).
Any idea what could be wrong? Should I create one key for each replica?
OK, I have succeeded in setting up Vault with auto unseal!
What I did:
- Changed the project (the project ID was required, not the name)
- Disabled raft (raft.enabled: false)
- Moved the backend to Google Cloud Storage, adding to the config:
storage "gcs" {
  bucket = "gitter-secrets"
  ha_enabled = "true"
}
ha_enabled = "true" was compulsory (with a regional bucket).
My final Helm values are:
global:
  enabled: true
injector:
  logLevel: "debug"
server:
  logLevel: "debug"
  extraEnvironmentVars:
    GOOGLE_REGION: global
    GOOGLE_PROJECT: esgi-projects-354109
    GOOGLE_APPLICATION_CREDENTIALS: /vault/userconfig/kms-creds/credentials.json
  extraVolumes:
    - type: 'secret'
      name: 'kms-creds'
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: false
    config: |
      ui = true
      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      seal "gcpckms" {
        project = "esgi-projects-354109"
        region = "global"
        key_ring = "gitter"
        crypto_key = "vault-helm-unseal-key"
      }
      storage "gcs" {
        bucket = "gitter-secrets"
        ha_enabled = "true"
      }
Using a service account with these permissions:
- Cloud KMS CryptoKey Encrypter/Decrypter
- Storage Object Admin (on gitter-secrets only)
I had an issue at first: vault-0 needed a vault operator init. After trying several things (post-install hooks among others) and coming back to the initial state, the pods were unsealing normally without my running anything.
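For reference, the manual initialization step can be sketched like this, assuming the chart's default pod naming and a vault namespace (with gcpckms auto-unseal, init returns recovery keys and a root token rather than unseal keys):

```shell
# Initialize the first Vault pod; the remaining replicas can then
# auto-unseal once the cluster has been initialized.
kubectl exec -ti vault-0 -n vault -- vault operator init
```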

How to configure S3 as backend storage for hashicorp vault

I have a running EKS cluster, and I want to deploy Vault on it using Terraform. My code works fine while deploying. This is my main.tf:
data "aws_eks_cluster" "default" {
  name = var.eks_cluster_name
}

data "aws_eks_cluster_auth" "default" {
  name = var.eks_cluster_name
}

resource "kubernetes_namespace" "vault" {
  metadata {
    name = "vault"
  }
}

resource "helm_release" "vault" {
  name       = "vault"
  repository = "https://helm.releases.hashicorp.com/"
  chart      = "vault"
  namespace  = kubernetes_namespace.vault.metadata.0.name
  values = [
    "${file("values.json")}"
  ]
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.default.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.default.token
  load_config_file       = false
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.default.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.default.token
    load_config_file       = false
  }
}
And this is values.json
server:
  image:
    repository: vault
    tag: latest
  dataStorage:
    enabled: true
  auditStorage:
    enabled: true
  ha:
    enabled: true
    replicas: 1
    listener "tcp" {
      address = "[::]:8200"
      cluster_address = "[::]:8201"
    }
    storage "s3" {
      access_key = "xxxxxxxxx"
      secret_key = "xxxxxxxxxx"
      bucket = "xxxx-vault"
      region = "xxxx-xxxx-x"
    }
    service_registration "kubernetes" {}
  extraVolumes:
    - type: secret
      name: tls
  extraEnvironmentVars:
    VAULT_ADDR: https://127.0.0.1:8200
    VAULT_SKIP_VERIFY: true
ui:
  enabled: true
  serviceType: LoadBalancer
But it is not taking my S3 bucket as storage: after every deploy it uses the filesystem as storage rather than the given S3 bucket. What's wrong here?
I think you missed a key in your values file: the Vault config has to go under config: |, otherwise the chart falls back to its default storage.
ha:
  enabled: true
  replicas: 1
  config: |
    listener "tcp" {
      address = "[::]:8200"
      cluster_address = "[::]:8201"
    }
    storage "s3" {
      access_key = "xxxxxxxxx"
      secret_key = "xxxxxxxxxx"
      bucket = "xxxx-vault"
      region = "xxxx-xxxx-x"
    }
    service_registration "kubernetes" {}

Vault server token login doesn't work as per lease time

We are using HashiCorp Vault with Consul and the filesystem as storage. We are facing an issue on login: my token duration is infinity, yet Vault still ends up sealed every few hours and requires unsealing and logging in again.
config.hcl:
ui = true

storage "consul" {
  address = "127.0.0.1:8500"
  path = "vault"
}

backend "file" {
  path = "/mnt/vault/data"
}

listener "tcp" {
  address = "127.0.0.1:8200"
  tls_disable = 1
}

telemetry {
  statsite_address = "127.0.0.1:8125"
  disable_hostname = true
}
Token lookup:
Key Value
--- -----
token **********
token_accessor ***********
token_duration ∞
token_renewable false
token_policies ["root"]
identity_policies []
policies ["root"]

VAULT_CLIENT_TOKEN keeps expiring every 24h

Environment:
Vault + Consul, all latest. Integrating Concourse (3.14.0) with Vault. All tokens and keys are throw-away. This is just a test cluster.
Problem:
No matter what I do, I get 768h as the token_duration value, and overnight my AppRole token keeps expiring. I have to regenerate the token, pass it to Concourse, and restart the service. I want this token not to expire.
[root@k1 etc]# vault write auth/approle/login role_id="34b73748-7e77-f6ec-c5fd-90c24a5a98f3" secret_id="80cc55f1-bb8b-e96c-78b0-fe61b243832d" duration=0
Key Value
--- -----
token 9a6900b7-062d-753f-131c-a2ac7eb040f1
token_accessor 171aeb1c-d2ce-0261-e20f-8ed6950d1d2a
token_duration 768h
token_renewable true
token_policies ["concourse" "default"]
identity_policies []
policies ["concourse" "default"]
token_meta_role_name concourse
[root@k1 etc]#
So I use the token 9a6900b7-062d-753f-131c-a2ac7eb040f1 for my Concourse to access secrets, and all is good until 24h later, when it gets expired.
I set duration to 0, but it didn't help.
$ vault write auth/approle/role/concourse secret_id_ttl=0 period=0 policies=concourse secret_id_num_uses=0 token_num_uses=0
My modified vaultconfig.hcl looks like this:
storage "consul" {
  address = "127.0.0.1:8500"
  path = "vault/"
  token = "95FBC040-C484-4D16-B489-AA732DB6ADF1"
  #token = "0b4bc7c7-7eb0-4060-4811-5f9a7185aa6f"
}

listener "tcp" {
  address = "0.0.0.0:8200"
  cluster_address = "0.0.0.0:8201"
  tls_min_version = "tls10"
  tls_disable = 1
}

cluster_addr = "http://192.168.163.132:8201"
api_addr = "http://192.168.163.132:8200"
disable_mlock = true
disable_cache = true
ui = true
default_lease_ttl = 0
cluster_name = "testcluster"
raw_storage_endpoint = true
My Concourse policy is vanilla:
[root@k1 etc]# vault policy read concourse
path "concourse/*" {
  policy = "read"
  capabilities = ["read", "list"]
}
[root@k1 etc]#
Look up token - 9a6900b7-062d-753f-131c-a2ac7eb040f1
[root@k1 etc]# vault token lookup 9a6900b7-062d-753f-131c-a2ac7eb040f1
Key Value
--- -----
accessor 171aeb1c-d2ce-0261-e20f-8ed6950d1d2a
creation_time 1532521379
creation_ttl 2764800
display_name approle
entity_id 11a0d4ac-10aa-0d62-2385-9e8071fc4185
expire_time 2018-08-26T07:22:59.764692652-05:00
explicit_max_ttl 0
id 9a6900b7-062d-753f-131c-a2ac7eb040f1
issue_time 2018-07-25T07:22:59.238050234-05:00
last_renewal 2018-07-25T07:24:44.764692842-05:00
last_renewal_time 1532521484
meta map[role_name:concourse]
num_uses 0
orphan true
path auth/approle/login
policies [concourse default]
renewable true
ttl 2763645
[root@k1 etc]#
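As a side note, the creation_ttl of 2764800 seconds in that lookup works out to exactly 768 hours (32 days), which is Vault's built-in default for max_lease_ttl, so the duration is coming from the system defaults rather than from anything set on the AppRole itself:

```python
# TTL reported by `vault token lookup`, in seconds
creation_ttl = 2764800

hours = creation_ttl / 3600
days = hours / 24

print(hours)  # 768.0
print(days)   # 32.0
```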
Any pointers, feedback is very appreciated.
Try setting the token_ttl and token_max_ttl parameters instead of secret_id_ttl when creating the AppRole.
You should also check your Vault default_lease_ttl and max_lease_ttl; they might be set to 24h.
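A sketch of that change, using the concourse role name from the question (token_ttl, token_max_ttl, and period are documented AppRole parameters; a periodic token can be renewed indefinitely, which is usually what a long-running service like Concourse wants):

```shell
# Recreate the role with explicit token TTL settings.
# period=24h makes the issued tokens periodic: they expire after 24h
# unless renewed, but can be renewed forever.
vault write auth/approle/role/concourse \
    policies="concourse" \
    secret_id_ttl=0 secret_id_num_uses=0 token_num_uses=0 \
    token_ttl=0 token_max_ttl=0 period=24h

# Inspect the resulting role configuration.
vault read auth/approle/role/concourse
```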