Vault server token login doesn't work as per lease time

We are using HashiCorp Vault with Consul and the filesystem as storage. I am facing an issue on every login: my token duration is infinity, but Vault still ends up sealed every few hours and I have to unseal it and log in again. Why?
config.hcl:
ui = true
storage "consul" {
  address = "127.0.0.1:8500"
  path    = "vault"
}
backend "file" {
  path = "/mnt/vault/data"
}
listener "tcp" {
  address     = "127.0.0.1:8200"
  tls_disable = 1
}
telemetry {
  statsite_address = "127.0.0.1:8125"
  disable_hostname = true
}
Token lookup:
Key                  Value
---                  -----
token                **********
token_accessor       ***********
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]
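Two things are worth separating here. First, the config above declares storage twice (storage "consul" and backend "file"; backend is the legacy spelling of storage), which is probably unintended. Second, and more to the point, seal status has nothing to do with token TTLs: whenever the Vault process restarts it comes back sealed, and it must be unsealed before any token, even an infinite-duration root token, can be used. A quick check with the standard CLI (assuming VAULT_ADDR points at the listener above):

export VAULT_ADDR=http://127.0.0.1:8200
vault status            # the "Sealed" field shows whether unsealing is needed
vault operator unseal   # repeat until the unseal key threshold is met

So if Vault asks to be unsealed every few hours, look for whatever is restarting the process (crashes, OOM kills, service restarts) rather than at the token's lease.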

Related

Autounseal Vault with GCP KMS

I would like to use the Vault auto-unseal mechanism with GCP KMS.
I have been following this tutorial (section: 'Google KMS Auto Unseal') and applying the official HashiCorp Helm chart with the following values:
global:
  enabled: true
injector:
  logLevel: "debug"
server:
  logLevel: "debug"
  extraEnvironmentVars:
    GOOGLE_REGION: global
    GOOGLE_PROJECT: ESGI-projects
    GOOGLE_APPLICATION_CREDENTIALS: /vault/userconfig/kms-creds/credentials.json
  extraVolumes:
    - type: 'secret'
      name: 'kms-creds'
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      config: |
        ui = true
        listener "tcp" {
          tls_disable = 1
          address = "[::]:8200"
          cluster_address = "[::]:8201"
        }
        seal "gcpckms" {
          project = "ESGI-projects"
          region = "global"
          key_ring = "gitter"
          crypto_key = "vault-helm-unseal-key"
        }
        storage "raft" {
          path = "/vault/data"
        }
I have created a kms-creds secret with the JSON credentials for a service account (I have tried both the Cloud KMS Service Agent and Owner roles, but neither works). The key ring contains the crypto key referenced in the config above.
My cluster is just a local cluster created with kind.
On the first replica of the Vault server everything looks OK (though the pod never reaches Running), and the two other replicas log the usual message saying that Vault is sealed.
Any idea what could be wrong? Should I create one key for each replica?
OK, I have succeeded in setting up Vault with auto-unseal!
What I did:
- Changed the project (the ID was required, not the name)
- Disabled raft (raft.enabled: false)
- Moved the backend to Google Cloud Storage, adding to the config:
storage "gcs" {
bucket = "gitter-secrets"
ha_enabled = "true"
}
ha_enabled = "true" was compulsory (with a regional bucket).
My final Helm values are:
global:
  enabled: true
injector:
  logLevel: "debug"
server:
  logLevel: "debug"
  extraEnvironmentVars:
    GOOGLE_REGION: global
    GOOGLE_PROJECT: esgi-projects-354109
    GOOGLE_APPLICATION_CREDENTIALS: /vault/userconfig/kms-creds/credentials.json
  extraVolumes:
    - type: 'secret'
      name: 'kms-creds'
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: false
    config: |
      ui = true
      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      seal "gcpckms" {
        project = "esgi-projects-354109"
        region = "global"
        key_ring = "gitter"
        crypto_key = "vault-helm-unseal-key"
      }
      storage "gcs" {
        bucket = "gitter-secrets"
        ha_enabled = "true"
      }
Using a service account with the following roles (a gcloud sketch follows):
- Cloud KMS CryptoKey Encrypter/Decrypter
- Storage Object Admin (on gitter-secrets only)
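For reference, granting those two roles could look roughly like this (SA_EMAIL is a placeholder for the service account's email; the key, ring, and bucket names are the ones from the values above):

gcloud kms keys add-iam-policy-binding vault-helm-unseal-key \
    --keyring gitter --location global \
    --member "serviceAccount:SA_EMAIL" \
    --role roles/cloudkms.cryptoKeyEncrypterDecrypter
gsutil iam ch serviceAccount:SA_EMAIL:roles/storage.objectAdmin gs://gitter-secrets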
I had an issue at first: vault-0 needed a vault operator init. After trying several things (post-install hooks among others) and coming back to the initial state, the pods were unsealing normally without my running anything.
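If you do need to initialize the first pod by hand, the usual one-off (assuming the chart's default pod naming and the current namespace) is:

kubectl exec -ti vault-0 -- vault operator init

With a KMS seal this prints recovery keys and a root token; unsealing itself then happens automatically through Cloud KMS on every start.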

Vault transit engine auto unseal does not pass VAULT_TOKEN when starting up?

VAULT-1 Unseal provider:
cat /etc/vault.d/vault.json
{
  "listener": [{
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  }],
  "storage": {
    "file": {
      "path": "/opt/vault/data"
    }
  },
  "max_lease_ttl": "1h",
  "default_lease_ttl": "1h"
}
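For context, the provider-side objects this config pairs with are typically created along these lines (key, policy, and mount names assumed to match the question; this mirrors HashiCorp's standard transit auto-unseal flow):

vault secrets enable transit
vault write -f transit/keys/autounseal
vault policy write autounseal - <<EOF
path "transit/encrypt/autounseal" {
  capabilities = ["update"]
}
path "transit/decrypt/autounseal" {
  capabilities = ["update"]
}
EOF
vault token create -policy=autounseal -ttl=10h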
VAULT-2 Unseal client (this is the Vault attempting to auto-unseal itself):
cat /etc/vault.d/vault.hcl
disable_mlock = true
ui = true
storage "file" {
  path = "/vault-2/data"
}
listener "tcp" {
  address = "0.0.0.0:8200"
  tls_disable = "true"
}
seal "transit" {
  address = "http://192.168.100.100:8200"
  disable_renewal = "false"
  key_name = "autounseal"
  mount_path = "transit/"
  tls_skip_verify = "true"
}
Token seems valid on VAULT-1:
vault token lookup s.XazV
Key Value
--- -----
accessor eCH1R3G
creation_time 1637091280
creation_ttl 10h
display_name token
entity_id n/a
expire_time 2021-11-17T00:34:40.837284665-05:00
explicit_max_ttl 0s
id s.XazV
issue_time 2021-11-16T14:34:40.837289691-05:00
meta <nil>
num_uses 0
On VAULT-2, I have an env var set:
export VAULT_TOKEN="s.XazV"
I have the corresponding policy enabled on VAULT-1. However, when starting the service on VAULT-2:
vault2 vault[758]: URL: PUT http://192.168.100.100:8200/v1/transit/encrypt/autounseal
vault2 vault[758]: Code: 400. Errors:
vault2 vault[758]: * missing client token
Thank you.
If you're starting up the Vault service with systemctl, you may need to configure the service file to include the token in an Environment configuration rather than with export.
https://askubuntu.com/questions/940679/pass-environment-variables-to-services-started-with-systemctl#940797
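A drop-in override is the usual way to do that; a sketch (standard systemd drop-in path, token value from the question):

# /etc/systemd/system/vault.service.d/override.conf
[Service]
Environment="VAULT_TOKEN=s.XazV"

After that, systemctl daemon-reload && systemctl restart vault. The reason the export doesn't work is that systemd units don't inherit environment variables from your login shell.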

Hashicorp Vault won't let me delete a Policy even using the root token

I am trying to delete a policy.
After logging in with the root token, I do the following:
$ vault policy delete testttt
Error deleting testttt: Error making API request.
URL: DELETE https://vault.local:8200/v1/sys/policies/acl/testttt
Code: 400. Errors:
* failed to delete policy: AccessDenied: Access Denied
status code: 403, request id: VB6YWECETDJ5KB7Q, host id:
S0FJvs41pSbzTmP1lDr/aVSOPjeRVz4Vk/ofkFHu8jvNjfzk6ARnY33qzP/usqmpVDExwLlsF44=
My config file looks like this:
storage "s3" {
access_key = "XXXX"
secret_key = "XXXX"
bucket = "XXXX-vault"
region = "eu-central-1"
}
listener "tcp" {
address = "0.0.0.0:8200"
tls_cert_file = "/etc/vault.d/fullchain.pem"
tls_key_file = "/etc/vault.d/privkey.pem"
}
api_addr = "http://0.0.0.0:8200"
cluster_addr = "https://0.0.0.0:8201"
ui = true
Something seems totally off, as after using the root token in the UI, I also just see this:
null is not an object (evaluating 'l.userRootNamespace')
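The request id and host id in that error suggest the AccessDenied is coming from the S3 storage backend rather than from Vault's own ACLs, i.e. the credentials in the storage stanza likely lack s3:DeleteObject on the bucket. A minimal IAM policy sketch for the Vault user (bucket name redacted as in the question; adjust to your account):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::XXXX-vault", "arn:aws:s3:::XXXX-vault/*"]
    }
  ]
}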

Add secret to freshly created Azure AKS using Terraform Kubernetes provider fails

I am creating a kubernetes cluster with the Azure Terraform provider and trying to add a secret to it. The cluster gets created fine but I am getting errors with authenticating to the cluster when creating the secret. I tried 2 different Terraform Kubernetes provider configurations. Here is the main configuration:
variable "client_id" {}
variable "client_secret" {}
resource "azurerm_resource_group" "rg-example" {
name = "rg-example"
location = "East US"
}
resource "azurerm_kubernetes_cluster" "k8s-example" {
name = "k8s-example"
location = azurerm_resource_group.rg-example.location
resource_group_name = azurerm_resource_group.rg-example.name
dns_prefix = "k8s-example"
default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_B2s"
}
service_principal {
client_id = var.client_id
client_secret = var.client_secret
}
role_based_access_control {
enabled = true
}
}
resource "kubernetes_secret" "secret_example" {
metadata {
name = "mysecret"
}
data = {
"something" = "super secret"
}
depends_on = [
azurerm_kubernetes_cluster.k8s-example
]
}
provider "azurerm" {
version = "=2.29.0"
features {}
}
output "host" {
value = azurerm_kubernetes_cluster.k8s-example.kube_config.0.host
}
output "cluster_username" {
value = azurerm_kubernetes_cluster.k8s-example.kube_config.0.username
}
output "cluster_password" {
value = azurerm_kubernetes_cluster.k8s-example.kube_config.0.password
}
output "client_key" {
value = azurerm_kubernetes_cluster.k8s-example.kube_config.0.client_key
}
output "client_certificate" {
value = azurerm_kubernetes_cluster.k8s-example.kube_config.0.client_certificate
}
output "cluster_ca_certificate" {
value = azurerm_kubernetes_cluster.k8s-example.kube_config.0.cluster_ca_certificate
}
Here is the first Kubernetes provider configuration using certificates:
provider "kubernetes" {
version = "=1.13.2"
load_config_file = "false"
host = azurerm_kubernetes_cluster.k8s-example.kube_config.0.host
client_certificate = azurerm_kubernetes_cluster.k8s-example.kube_config.0.client_certificate
client_key = azurerm_kubernetes_cluster.k8s-example.kube_config.0.client_key
cluster_ca_certificate = azurerm_kubernetes_cluster.k8s-example.kube_config.0.cluster_ca_certificate
}
And the error I'm receiving:
kubernetes_secret.secret_example: Creating...
Error: Failed to configure client: tls: failed to find any PEM data in certificate input
Here is the second Kubernetes provider configuration using HTTP Basic Authorization:
provider "kubernetes" {
version = "=1.13.2"
load_config_file = "false"
host = azurerm_kubernetes_cluster.k8s-example.kube_config.0.host
username = azurerm_kubernetes_cluster.k8s-example.kube_config.0.username
password = azurerm_kubernetes_cluster.k8s-example.kube_config.0.password
}
And the error I'm receiving:
kubernetes_secret.secret_example: Creating...
Error: Post "https://k8s-example-c4a78c03.hcp.eastus.azmk8s.io:443/api/v1/namespaces/default/secrets": x509: certificate signed by unknown authority
ANALYSIS
I checked the outputs of azurerm_kubernetes_cluster.k8s-example and the data seems valid (username, password, host, etc.). Maybe I need an SSL certificate on my Kubernetes cluster, but I am not certain, as I'm new to this. Can someone help me out?
According to this issue in hashicorp/terraform-provider-kubernetes, you need to use base64decode(). The example the author used:
provider "kubernetes" {
host = "${google_container_cluster.k8sexample.endpoint}"
username = "${var.master_username}"
password = "${var.master_password}"
client_certificate = "${base64decode(google_container_cluster.k8sexample.master_auth.0.client_certificate)}"
client_key = "${base64decode(google_container_cluster.k8sexample.master_auth.0.client_key)}"
cluster_ca_certificate = "${base64decode(google_container_cluster.k8sexample.master_auth.0.cluster_ca_certificate)}"
}
That author said they got the same error as you if they left out the base64decode. You can read more about that function here: https://www.terraform.io/docs/configuration/functions/base64decode.html
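Applied to the AKS configuration in the question, that fix would look something like this (the same attributes, just wrapped in base64decode; a sketch, not tested against that exact provider version):

provider "kubernetes" {
  version          = "=1.13.2"
  load_config_file = "false"

  host                   = azurerm_kubernetes_cluster.k8s-example.kube_config.0.host
  client_certificate     = base64decode(azurerm_kubernetes_cluster.k8s-example.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.k8s-example.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.k8s-example.kube_config.0.cluster_ca_certificate)
}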

VAULT_CLIENT_TOKEN keeps expiring every 24h

Environment:
Vault + Consul, all latest. Integrating Concourse (3.14.0) with Vault. All tokens and keys are throw-away. This is just a test cluster.
Problem:
No matter what I do, I get 768h as the token_duration value, and overnight my AppRole token expires anyway. I then have to regenerate the token, pass it to Concourse, and restart the service. I want this token not to expire.
[root@k1 etc]# vault write auth/approle/login role_id="34b73748-7e77-f6ec-c5fd-90c24a5a98f3" secret_id="80cc55f1-bb8b-e96c-78b0-fe61b243832d" duration=0
Key Value
--- -----
token 9a6900b7-062d-753f-131c-a2ac7eb040f1
token_accessor 171aeb1c-d2ce-0261-e20f-8ed6950d1d2a
token_duration 768h
token_renewable true
token_policies ["concourse" "default"]
identity_policies []
policies ["concourse" "default"]
token_meta_role_name concourse
[root@k1 etc]#
So I use the token 9a6900b7-062d-753f-131c-a2ac7eb040f1 for Concourse to access secrets, and all is good until 24h later, when it expires.
I set duration to 0, but it didn't help.
$ vault write auth/approle/role/concourse secret_id_ttl=0 period=0 policies=concourse secret_id_num_uses=0 token_num_uses=0
My modified vaultconfig.hcl looks like this:
storage "consul" {
address = "127.0.0.1:8500"
path = "vault/"
token = "95FBC040-C484-4D16-B489-AA732DB6ADF1"
#token = "0b4bc7c7-7eb0-4060-4811-5f9a7185aa6f"
}
listener "tcp" {
address = "0.0.0.0:8200"
cluster_address = "0.0.0.0:8201"
tls_min_version = "tls10"
tls_disable = 1
}
cluster_addr = "http://192.168.163.132:8201"
api_addr = "http://192.168.163.132:8200"
disable_mlock = true
disable_cache = true
ui = true
default_lease_ttl = 0
cluster_name = "testcluster"
raw_storage_endpoint = true
My Concourse policy is vanilla:
[root@k1 etc]# vault policy read concourse
path "concourse/*" {
  policy = "read"
  capabilities = ["read", "list"]
}
[root@k1 etc]#
Looking up token 9a6900b7-062d-753f-131c-a2ac7eb040f1:
[root@k1 etc]# vault token lookup 9a6900b7-062d-753f-131c-a2ac7eb040f1
Key Value
--- -----
accessor 171aeb1c-d2ce-0261-e20f-8ed6950d1d2a
creation_time 1532521379
creation_ttl 2764800
display_name approle
entity_id 11a0d4ac-10aa-0d62-2385-9e8071fc4185
expire_time 2018-08-26T07:22:59.764692652-05:00
explicit_max_ttl 0
id 9a6900b7-062d-753f-131c-a2ac7eb040f1
issue_time 2018-07-25T07:22:59.238050234-05:00
last_renewal 2018-07-25T07:24:44.764692842-05:00
last_renewal_time 1532521484
meta map[role_name:concourse]
num_uses 0
orphan true
path auth/approle/login
policies [concourse default]
renewable true
ttl 2763645
[root@k1 etc]#
Any pointers or feedback would be much appreciated.
Try setting the token_ttl and token_max_ttl parameters instead of secret_id_ttl when creating the new AppRole.
You should also check your Vault server's default_lease_ttl and max_lease_ttl; they might be set to 24h.
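A sketch of an updated role along those lines (the TTL values are illustrative):

vault write auth/approle/role/concourse \
    token_ttl=768h \
    token_max_ttl=768h \
    secret_id_num_uses=0 \
    token_num_uses=0 \
    policies=concourse

If the token must never expire, a periodic token is the supported pattern (set a period on the role and keep renewing the token); non-periodic, non-root tokens are always capped by the server's maximum TTL.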