Vault is already initialized error message - hashicorp-vault

I deployed the Vault Helm chart with the values below, and when I run the "vault operator init" command I get the error "Vault is already initialized". I do not understand why it is already initialized.
Also, when I enable the readinessProbe the pod keeps restarting, I assume because it is not initialized properly.
global:
  enabled: true
  tlsDisable: false
server:
  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/vault-server-tls/ca.crt
  logLevel: debug
  logFormat: standard
  readinessProbe:
    enabled: false
  authDelegator:
    enabled: true
  extraVolumes:
    - type: secret
      name: vault-server-tls # Matches the ${SECRET_NAME} from above
  standalone:
    enabled: true
    config: |
      listener "tcp" {
        address = "[::]:8200"
        cluster_address = "[::]:8201"
        tls_cert_file = "/vault/userconfig/vault-server-tls/tls.crt"
        tls_key_file = "/vault/userconfig/vault-server-tls/tls.key"
      }
      storage "file" {
        path = "/vault/data"
      }
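For reference, a quick way to confirm the reported state from inside the pod is a status check like the one below (a minimal sketch, assuming the release is named vault so the first pod is vault-0):
kubectl exec -ti vault-0 -- vault status
# The "Initialized" and "Sealed" fields show whether the storage backend already holds an initialized Vault.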

Related

Vault Helm Chart not using Config from values.yaml

I'm trying to install HashiCorp Vault with the official Helm chart, deploying it through the Argo CD UI. I have a Git repo with a values.yaml file that specifies some non-default config (for example, HA mode and AWS KMS unseal). When I set up the chart via the Argo CD web UI, I can point it to the values.yaml file and see the values I set in the parameters section of the app. However, when I deploy the chart, the config doesn't get applied. I checked the ConfigMap created by the chart, and it follows the defaults despite my overrides. I suspect I'm using Argo CD wrong since I'm fairly new to it, although it clearly shows the overrides from my values.yaml in the app's parameters.
Here is the relevant section of my values.yaml
server:
  extraSecretEnvironmentVars:
    - envName: AWS_SECRET_ACCESS_KEY
      secretName: vault
      secretKey: AWS_SECRET_ACCESS_KEY
    - envName: AWS_ACCESS_KEY_ID
      secretName: vault
      secretKey: AWS_ACCESS_KEY_ID
    - envName: AWS_KMS_KEY_ID
      secretName: vault
      secretKey: AWS_KMS_KEY_ID
  ha:
    enabled: true
    replicas: 3
    apiAddr: https://myvault.com:8200
    clusterAddr: https://myvault.com:8201
    raft:
      enabled: true
      setNodeId: false
  config: |
    ui = true
    listener "tcp" {
      tls_disable = 1
      address = "[::]:8200"
      cluster_address = "[::]:8201"
    }
    storage "raft" {
      path = "/vault/data"
    }
    service_registration "kubernetes" {}
    seal "awskms" {
      region = "us-west-2"
      kms_key_id = "$VAULT_KMS_KEY_ID"
    }
However, the deployed config looks like this
disable_mlock = true
ui = true
listener "tcp" {
  tls_disable = 1
  address = "[::]:8200"
  cluster_address = "[::]:8201"
  # Enable unauthenticated metrics access (necessary for Prometheus Operator)
  #telemetry {
  #  unauthenticated_metrics_access = "true"
  #}
}
storage "file" {
  path = "/vault/data"
}
# Example configuration for using auto-unseal, using Google Cloud KMS. The
# GKMS keys must already exist, and the cluster must have a service account
# that is authorized to access GCP KMS.
#seal "gcpckms" {
#  project    = "vault-helm-dev"
#  region     = "global"
#  key_ring   = "vault-helm-unseal-kr"
#  crypto_key = "vault-helm-unseal-key"
#}
# Example configuration for enabling Prometheus metrics in your config.
#telemetry {
#  prometheus_retention_time = "30s",
#  disable_hostname = true
#}
I've tried several changes to this config, such as setting the AWS_KMS_UNSEAL environment variable, which doesn't seem to get applied. I've also exec'd into the containers, and none of my environment variables show up when I run printenv. I can't figure out why the pods are being deployed with the default config.
With the help of murtiko I figured this out: the indentation of my config block was off. It needs to be nested below the ha block. My working config looks like this:
global:
  enabled: true
server:
  extraSecretEnvironmentVars:
    - envName: AWS_REGION
      secretName: vault
      secretKey: AWS_REGION
    - envName: AWS_ACCESS_KEY_ID
      secretName: vault
      secretKey: AWS_ACCESS_KEY_ID
    - envName: AWS_SECRET_ACCESS_KEY
      secretName: vault
      secretKey: AWS_SECRET_ACCESS_KEY
    - envName: VAULT_AWSKMS_SEAL_KEY_ID
      secretName: vault
      secretKey: VAULT_AWSKMS_SEAL_KEY_ID
  ha:
    enabled: true
    config: |
      ui = true
      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      seal "awskms" {
      }
      storage "raft" {
        path = "/vault/data"
      }
    raft:
      enabled: true
      setNodeId: true
      config: |
        ui = true
        listener "tcp" {
          tls_disable = 1
          address = "[::]:8200"
          cluster_address = "[::]:8201"
        }
        seal "awskms" {
        }
        storage "raft" {
          path = "/vault/data"
        }

Deploying HA Vault To GKE - dial tcp 127.0.0.1:8200: connect: connection refused

As per the official documentation (https://developer.hashicorp.com/vault/tutorials/kubernetes/kubernetes-google-cloud-gke), the following works as expected:
helm install vault hashicorp/vault \
--set='server.ha.enabled=true' \
--set='server.ha.raft.enabled=true'
I can then run:
kubectl exec vault-0 -- vault status
And this works perfectly fine.
However, I've noticed that if I don't have Raft enabled, I get the "dial tcp 127.0.0.1:8200: connect: connection refused" error message:
helm install vault hashicorp/vault \
--set='server.ha.enabled=true'
I'm trying to work out why my own Vault deployment gives me the same error.
It deploys Vault into GKE with auto-unseal keys and a Google Cloud Storage backend configured.
My values.yaml file contains:
global:
  enabled: true
  tlsDisable: false
injector:
  enabled: true
  replicas: 1
  port: 8080
  leaderElector:
    enabled: true
  image:
    repository: "hashicorp/vault-k8s"
    tag: "latest"
    pullPolicy: IfNotPresent
  agentImage:
    repository: "hashicorp/vault"
    tag: "latest"
  authPath: "auth/kubernetes"
  webhook:
    failurePolicy: Ignore
    matchPolicy: Exact
    objectSelector: |
      matchExpressions:
      - key: app.kubernetes.io/name
        operator: NotIn
        values:
        - {{ template "vault.name" . }}-agent-injector
  certs:
    secretName: vault-lab.company.com-cert
    certName: tls.crt
    keyName: tls.key
server:
  enabled: true
  image:
    repository: "hashicorp/vault"
    tag: "latest"
    pullPolicy: IfNotPresent
  extraEnvironmentVars:
    GOOGLE_APPLICATION_CREDENTIALS: /vault/userconfig/vault-gcs/service-account.json
    GOOGLE_REGION: europe-west2
    GOOGLE_PROJECT: sandbox-vault-lab
  volumes:
    - name: vault-gcs
      secret:
        secretName: vault-gcs
    - name: vault-lab-cert
      secret:
        secretName: vault-lab.company.com-cert
  volumeMounts:
    - name: vault-gcs
      mountPath: /vault/userconfig/vault-gcs
      readOnly: true
    - name: vault-lab-cert
      mountPath: /etc/tls
      readOnly: true
  service:
    enabled: true
    type: NodePort
    externalTrafficPolicy: Cluster
    port: 8200
    targetPort: 8200
    annotations:
      cloud.google.com/app-protocols: '{"http":"HTTPS"}'
      beta.cloud.google.com/backend-config: '{"ports": {"http":"config-default"}}'
  ha:
    enabled: true
    replicas: 3
    config: |
      listener "tcp" {
        tls_disable = 0
        tls_min_version = "tls12"
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      storage "gcs" {
        bucket = "vault-lab-bucket"
        ha_enabled = "true"
      }
      service_registration "kubernetes" {}
      # Example configuration for using auto-unseal, using Google Cloud KMS. The
      # GKMS keys must already exist, and the cluster must have a service account
      # that is authorized to access GCP KMS.
      seal "gcpckms" {
        project    = "sandbox-vault-lab"
        region     = "global"
        key_ring   = "vault-helm-unseal-kr"
        crypto_key = "vault-helm-unseal-key"
      }
Something here must be misconfigured, but I'm not sure what.
Any help would be appreciated.
EDIT:
Even after configuring Raft, I still encounter the same issue:
raft:
  enabled: true
  setNodeId: false
  config: |
    ui = false
    listener "tcp" {
      # tls_disable = 0
      address = "[::]:8200"
      cluster_address = "[::]:8201"
      tls_cert_file = "/etc/tls/tls.crt"
      tls_key_file = "/etc/tls/tls.key"
    }
    #storage "raft" {
    #  path = "/vault/data"
    #}
    storage "gcs" {
      bucket = "vault-lab-bucket"
      ha_enabled = "true"
    }
    service_registration "kubernetes" {}
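For what it's worth, "connection refused" on 127.0.0.1:8200 simply means nothing is listening on that port inside the pod, so the first thing worth checking is why the Vault server process never started its listener (a sketch assuming the release is named vault):
# Look for config or seal errors in the server log before digging into storage settings
kubectl logs vault-0
kubectl describe pod vault-0   # check for failed volume mounts or crash loops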

Hashicorp Vault error - Client sent an HTTP request to an HTTPS server

I'm having trouble deploying HashiCorp Vault on Kubernetes with Helm and can't get Vault to work at all. I've tried changing almost every parameter I could and still can't get it to work, and I don't know exactly where the issue lies.
The error I get is mainly "Error checking seal status: Client sent an HTTP request to an HTTPS server".
If I set tls_disable = true inside .Values.ha.config, then I get an error that Vault is sealed, but I still can't view the UI. Deployment has been inconsistent: it sometimes works and sometimes doesn't, and I can't reproduce where the bug lies. This has been a headache.
Here is my values.yaml file:
server:
  enabled: true
  ingress:
    enabled: true
    annotations:
      cert.<issuer>.cloud/issuer: <intermediate-hostname>
      cert.<issuer>.cloud/secretname: vault-server-tls
      cert.<issuer>.cloud/purpose: managed
      dns.<issuer>.cloud/class: <class>
      dns.<issuer>.cloud/dnsnames: "<hostname>"
      dns.<issuer>.cloud/ttl: "600"
    hosts:
      - host: "vault.<hostname>"
        paths: []
    tls:
      - secretName: vault-server-tls
        hosts:
          - vault.<hostname>
  extraVolumes:
    - type: secret
      name: vault-server-tls
  service:
    enabled: true
    port: 8200
    targetPort: 443
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      config: |
        ui = true
        listener "tcp" {
          tls_disable = false
          address = "[::]:8200"
          cluster_address = "[::]:8201"
          tls_cert_file = "/vault/userconfig/vault-server-tls/tls.crt"
          tls_key_file = "/vault/userconfig/vault-server-tls/tls.key"
          tls_client_ca_file = "/vault/userconfig/vault-server-tls/vault.ca"
        }
        storage "raft" {
          path = "/vault/data"
        }
    config: |
      ui = true
      listener "tcp" {
        tls_disable = false
        address = "[::]:443"
        cluster_address = "[::]:8201"
        tls_cert_file = "/vault/userconfig/vault-server-tls/tls.crt"
        tls_key_file = "/vault/userconfig/vault-server-tls/tls.key"
        tls_client_ca_file = "/vault/userconfig/vault-server-tls/vault.ca"
        tls_require_and_verify_client_cert = false
      }
      storage "consul" {
        path = "vault"
        address = "HOST_IP:8500"
      }
      disable_mlock = true
ui:
  enabled: true
  serviceType: LoadBalancer
  externalPort: 443
  targetPort: 8200
EDIT: I'm now able to view the UI from the LoadBalancer, but not from the hostname set in dns.<issuer>.cloud/dnsnames: "<hostname>" under ingress.annotations.
I still get the following error, even though the UI is reachable via the LoadBalancer: Readiness probe failed. Error unsealing: Error making API request. URL: PUT http://127.0.0.1:8200/v1/sys/unseal Code: 400. Raw Message: Client sent an HTTP request to an HTTPS server.
As you mentioned, you are facing the errors "Error checking seal status: Client sent an HTTP request to an HTTPS server" and "Vault is sealed".
Once you have deployed Vault using the Helm chart, you have to initialize and unseal it with the CLI the first time; after that the UI will be available to use.
Reference document: https://learn.hashicorp.com/tutorials/vault/kubernetes-raft-deployment-guide?in=vault/kubernetes#initialize-and-unseal-vault
Get the list of pods:
kubectl get pods --selector='app.kubernetes.io/name=vault' --namespace=vault
Exec into the pods to initialize and unseal Vault:
kubectl exec --stdin=true --tty=true vault-0 -- vault operator init
kubectl exec --stdin=true --tty=true vault-0 -- vault operator unseal
Once you unseal Vault, the pods' READY status will change from 0/1 to 1/1.
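For completeness, the typical first-time flow looks like this (a sketch assuming the default five key shares with an unseal threshold of three):
kubectl exec -ti vault-0 -- vault operator init     # prints the unseal keys and the initial root token
kubectl exec -ti vault-0 -- vault operator unseal   # run three times, pasting a different unseal key each time
kubectl exec -ti vault-0 -- vault status            # "Sealed" should now report false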

UI 404 - Vault Kubernetes

I'm testing out Vault in Kubernetes and installing it via the Helm chart. I've created an overrides file; it's an amalgamation of a few different pages from the official docs.
The pods come up OK and reach Ready status, and I can unseal Vault manually using 3 of the generated keys. However, I get a 404 when browsing the UI, which is exposed externally on a load balancer in AKS. Here's my config:
global:
  enabled: true
  tlsDisable: false
injector:
  enabled: false
server:
  readinessProbe:
    enabled: true
    path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
  # livenessProbe:
  #   enabled: true
  #   path: "/v1/sys/health?standbyok=true"
  #   initialDelaySeconds: 60
  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/vault-server-tls/vault.ca
  extraVolumes:
    - type: secret
      name: vault-server-tls # Matches the ${SECRET_NAME} from above
  standalone:
    enabled: true
    config: |
      listener "tcp" {
        address = "[::]:8200"
        cluster_address = "[::]:8201"
        tls_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
        tls_key_file = "/vault/userconfig/vault-server-tls/vault.key"
        tls_client_ca_file = "/vault/userconfig/vault-server-tls/vault.ca"
      }
      storage "file" {
        path = "/vault/data"
      }
# Vault UI
ui:
  enabled: true
  serviceType: "LoadBalancer"
  serviceNodePort: null
  externalPort: 443
  # For Added Security, edit the below
  # loadBalancerSourceRanges:
  #   5.69.25.6/32
I'm still getting to grips with Vault. My liveness probe is commented out because it was permanently failing and causing the pod to be rescheduled, even though the Vault service status showed it as healthy and awaiting an unseal. That's a side issue compared to the UI; I'm just mentioning it in case the failing liveness probe is related.
Thanks!
So, I don't think the documentation around deploying to Kubernetes with Helm is all that clear, but I was basically missing a ui = true flag in the HCL config stanza. Note that this is in addition to the value passed to the Helm chart:
# Vault UI
ui:
  enabled: true
  serviceType: "LoadBalancer"
  serviceNodePort: null
  externalPort: 443
Which I had mistakenly assumed was enough to enable the UI.
Here's the config now, with working UI:
global:
enabled: true
tlsDisable: false
injector:
enabled: false
server:
readinessProbe:
enabled: true
path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204"
extraEnvironmentVars:
VAULT_CACERT: /vault/userconfig/vault-server-tls/vault.ca
extraVolumes:
- type: secret
name: vault-server-tls # Matches the ${SECRET_NAME} from above
standalone:
enabled: true
config: |
ui = true
listener "tcp" {
address = "[::]:8200"
cluster_address = "[::]:8201"
tls_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
tls_key_file = "/vault/userconfig/vault-server-tls/vault.key"
tls_client_ca_file = "/vault/userconfig/vault-server-tls/vault.ca"
}
storage "file" {
path = "/vault/data"
}
# Vault UI
ui:
enabled: true
serviceType: "LoadBalancer"
serviceNodePort: null
externalPort: 443
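To double-check where the UI is actually exposed, listing the services shows the external IP assigned by the load balancer (a sketch assuming the release is named vault; the chart typically creates a separate vault-ui service when ui.enabled is true):
kubectl get svc vault-ui   # the LoadBalancer's EXTERNAL-IP serves the UI on the external port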

Spinnaker deployment in kubernetes is failing

Background: I have set up a ServiceAccount and spinnaker-role-binding in the default namespace, created the spinnaker namespace, and deployed services on ports 9000 and 8084.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/spin-deck-np LoadBalancer hidden <pending> 9000:31295/TCP 9m39s
service/spin-gate-np LoadBalancer hidden <pending> 8084:32161/TCP 9m39s
I created a Halyard deployment in the default namespace and configured hal inside it.
Problem: when I run hal deploy apply, I get the error below:
Problems in Global:
! ERROR Unexpected exception:
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET
at:
https://kubernetes.default/apis/extensions/v1beta1/namespaces/spinnaker/replicasets.
Message: the server could not find the requested resource. Received status:
Status(apiVersion=v1, code=404, details=StatusDetails(causes=[], group=null,
kind=null, name=null, retryAfterSeconds=null, uid=null,
additionalProperties={}), kind=Status, message=the server could not find the
requested resource, metadata=ListMeta(resourceVersion=null, selfLink=null,
additionalProperties={}), reason=NotFound, status=Failure,
additionalProperties={}).
Below is my kube config file at /home/spinnaker/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    server: https://kubernetes.default
  name: default
contexts:
- context:
    cluster: default
    user: user
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: user
  user:
    token: *********************
Below is the hal config file at /home/spinnaker/.hal/config
currentDeployment: default
deploymentConfigurations:
- name: default
  version: 1.8.1
  providers:
    appengine:
      enabled: false
      accounts: []
    aws:
      enabled: false
      accounts: []
      bakeryDefaults:
        baseImages: []
      defaultKeyPairTemplate: '{{name}}-keypair'
      defaultRegions:
      - name: us-west-2
      defaults:
        iamRole: BaseIAMRole
    ecs:
      enabled: false
      accounts: []
    azure:
      enabled: false
      accounts: []
      bakeryDefaults:
        templateFile: azure-linux.json
        baseImages: []
    dcos:
      enabled: false
      accounts: []
      clusters: []
    dockerRegistry:
      enabled: true
      accounts:
      - name: my-docker-registry
        requiredGroupMembership: []
        providerVersion: V1
        permissions: {}
        address: https://index.docker.io
        email: fake.email#spinnaker.io
        cacheIntervalSeconds: 30
        clientTimeoutMillis: 60000
        cacheThreads: 1
        paginateSize: 100
        sortTagsByDate: false
        trackDigests: false
        insecureRegistry: false
        repositories:
        - library/nginx
      primaryAccount: my-docker-registry
    google:
      enabled: false
      accounts: []
      bakeryDefaults:
        templateFile: gce.json
        baseImages: []
        zone: us-central1-f
        network: default
        useInternalIp: false
    kubernetes:
      enabled: true
      accounts:
      - name: my-k8s-account
        requiredGroupMembership: []
        providerVersion: V1
        permissions: {}
        dockerRegistries:
        - accountName: my-docker-registry
          namespaces: []
        configureImagePullSecrets: true
        cacheThreads: 1
        namespaces: []
        omitNamespaces: []
        kinds: []
        omitKinds: []
        customResources: []
        cachingPolicies: []
        kubeconfigFile: /home/spinnaker/.kube/config
        oauthScopes: []
        oAuthScopes: []
      primaryAccount: my-k8s-account
    openstack:
      enabled: false
      accounts: []
      bakeryDefaults:
        baseImages: []
    oracle:
      enabled: false
      accounts: []
  deploymentEnvironment:
    size: SMALL
    type: Distributed
    accountName: my-k8s-account
    updateVersions: true
    consul:
      enabled: false
    vault:
      enabled: false
    customSizing: {}
    gitConfig:
      upstreamUser: spinnaker
  persistentStorage:
    persistentStoreType: gcs
    azs: {}
    gcs:
      jsonPath: /home/spinnaker/.gcp/gcs-account.json
      project: round-reality
      bucket: spin-94cc2e22-8ece-4bc1-80fd-e9df71c1d9f4
      rootFolder: front50
      bucketLocation: us
    redis: {}
    s3:
      rootFolder: front50
    oracle: {}
  features:
    auth: false
    fiat: false
    chaos: false
    entityTags: false
    jobs: false
  metricStores:
    datadog:
      enabled: false
    prometheus:
      enabled: false
      add_source_metalabels: true
    stackdriver:
      enabled: false
    period: 30
    enabled: false
  notifications:
    slack:
      enabled: false
  timezone: America/Los_Angeles
  ci:
    jenkins:
      enabled: false
      masters: []
    travis:
      enabled: false
      masters: []
  security:
    apiSecurity:
      ssl:
        enabled: false
      overrideBaseUrl: http://External IP of worker:8084
    uiSecurity:
      ssl:
        enabled: false
      overrideBaseUrl: http://External IP of worker:9000
    authn:
      oauth2:
        enabled: false
        client: {}
        resource: {}
        userInfoMapping: {}
      saml:
        enabled: false
      ldap:
        enabled: false
      x509:
        enabled: false
      iap:
        enabled: false
      enabled: false
    authz:
      groupMembership:
        service: EXTERNAL
        google:
          roleProviderType: GOOGLE
        github:
          roleProviderType: GITHUB
        file:
          roleProviderType: FILE
      enabled: false
  artifacts:
    bitbucket:
      enabled: false
      accounts: []
    gcs:
      enabled: false
      accounts: []
    github:
      enabled: false
      accounts: []
    gitlab:
      enabled: false
      accounts: []
    http:
      enabled: false
      accounts: []
    s3:
      enabled: false
      accounts: []
  pubsub:
    google:
      enabled: false
      subscriptions: []
  canary:
    enabled: false
    serviceIntegrations:
    - name: google
      enabled: false
      accounts: []
      gcsEnabled: false
      stackdriverEnabled: false
    - name: prometheus
      enabled: false
      accounts: []
    - name: datadog
      enabled: false
      accounts: []
    - name: aws
      enabled: false
      accounts: []
      s3Enabled: false
    reduxLoggerEnabled: true
    defaultJudge: NetflixACAJudge-v1.0
    stagesEnabled: true
    templatesEnabled: true
    showAllConfigsEnabled: true
I used the commands below inside the Halyard container to configure kubectl to talk to Kubernetes:
kubectl config set-cluster default --server=https://kubernetes.default --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubectl config set-context default --cluster=default
token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
kubectl config set-credentials user --token=$token
kubectl config set-context default --user=user
kubectl config use-context default
How can I resolve this error in the Spinnaker deployment?
Thank you
From your config file, it looks like the kubeconfig context is not set up correctly.
Please use the commands below:
# Set a variable for the admin kubeconfig file location (fetch the config file with --admin if possible)
kubeconfig_path="<my-k8s-account-admin-file-path>"
hal config provider kubernetes account add my-k8s-account --provider-version v2 \
  --kubeconfig-file "$kubeconfig_path" \
  --context $(kubectl config current-context --kubeconfig "$kubeconfig_path")
After executing the commands above, you will see a context set for the account in your hal config, which is missing in the current config.
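Once the account points at a working kubeconfig context, re-applying the deployment picks the change up (a short usage note, assuming you run it from the same Halyard container):
hal config provider kubernetes enable
hal deploy apply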