I'm trying to install Traefik on a K8s cluster using ArgoCD to deploy the official Helm chart, but I also need it to use an additional "values.yml" file. When I try to specify in the Application YAML file which additional values file to use, it fails with a file-not-found error.
Here is what I'm using:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: argo-traefik-chart
namespace: argocd
spec:
project: default
source:
path: traefik
repoURL: https://github.com/traefik/traefik-helm-chart.git
targetRevision: HEAD
helm:
valueFiles:
- /traefik-values.yml
destination:
server: https://kubernetes.default.svc
namespace: 2195-leaf-dev-traefik
syncPolicy:
syncOptions:
- CreateNamespace=true
automated:
prune: true
selfHeal: true
Here is the traefik-values.yml file.
additionalArguments:
# Configure your CertificateResolver here...
#
# HTTP Challenge
# ---
# Generic Example:
# - --certificatesresolvers.generic.acme.email=your-email#example.com
# - --certificatesresolvers.generic.acme.caServer=https://acme-v02.api.letsencrypt.org/directory
# - --certificatesresolvers.generic.acme.httpChallenge.entryPoint=web
# - --certificatesresolvers.generic.acme.storage=/ssl-certs/acme-generic.json
#
# Prod / Staging Example:
# - --certificatesresolvers.staging.acme.email=your-email#example.com
# - --certificatesresolvers.staging.acme.caServer=https://acme-staging-v02.api.letsencrypt.org/directory
# - --certificatesresolvers.staging.acme.httpChallenge.entryPoint=web
# - --certificatesresolvers.staging.acme.storage=/ssl-certs/acme-staging.json
# - --certificatesresolvers.production.acme.email=your-email#example.com
# - --certificatesresolvers.production.acme.caServer=https://acme-v02.api.letsencrypt.org/directory
# - --certificatesresolvers.production.acme.httpChallenge.entryPoint=web
# - --certificatesresolvers.production.acme.storage=/ssl-certs/acme-production.json
#
# DNS Challenge
# ---
# Cloudflare Example:
# - --certificatesresolvers.cloudflare.acme.dnschallenge.provider=cloudflare
# - --certificatesresolvers.cloudflare.acme.email=your-email#example.com
# - --certificatesresolvers.cloudflare.acme.dnschallenge.resolvers=1.1.1.1
# - --certificatesresolvers.cloudflare.acme.storage=/ssl-certs/acme-cloudflare.json
#
# Generic (replace with your DNS provider):
# - --certificatesresolvers.generic.acme.dnschallenge.provider=generic
# - --certificatesresolvers.generic.acme.email=your-email#example.com
# - --certificatesresolvers.generic.acme.storage=/ssl-certs/acme-generic.json
logs:
# Configure log settings here...
general:
level: DEBUG
ports:
# Configure your entrypoints here...
web:
# (optional) Permanent Redirect to HTTPS
redirectTo: websecure
websecure:
tls:
enabled: true
# (optional) Set a Default CertResolver
# certResolver: cloudflare
#env:
# Set your environment variables here...
#
# DNS Challenge Credentials
# ---
# Cloudflare Example:
# - name: CF_API_EMAIL
# valueFrom:
# secretKeyRef:
# key: email
# name: cloudflare-credentials
# - name: CF_API_KEY
# valueFrom:
# secretKeyRef:
# key: apiKey
# name: cloudflare-credentials
# Just to do it for now
envFrom:
- secretRef:
name: traefik-secrets
# Disable Dashboard
ingressRoute:
dashboard:
enabled: true
# Persistent Storage
persistence:
enabled: true
name: ssl-certs
size: 1Gi
path: /ssl-certs
# deployment:
# initContainers:
# # The "volume-permissions" init container is required if you run into permission issues.
# # Related issue: https://github.com/containous/traefik/issues/6972
# - name: volume-permissions
# image: busybox:1.31.1
# command: ["sh", "-c", "chmod -Rv 600 /ssl-certs/*"]
# volumeMounts:
# - name: ssl-certs
# mountPath: /ssl-certs
# Set Traefik as your default Ingress Controller, according to Kubernetes 1.19+ changes.
ingressClass:
enabled: true
isDefaultClass: true
The traefik-values.yml file is in the same sub-directory as this file. I fire this off with kubectl apply -f, but when I go to look at it in the Argo GUI, it shows an error. I'll paste the entire thing below, but it looks like the important part is this:
failed exit status 1: Error: open .traefik-values.yml: no such file or directory
It's putting a period before the name of the file. I tried different ways of specifying the file: .traefik-values.yml and ./traefik-values.yml. Those get translated to:
: Error: open .traefik/.traefik-values.yml: no such file or directory
When I do a helm install using the exact same traefik-values.yml file, I get exactly what I expect. And when I run Argo without the alternate file, it deploys, but without the needed options of course.
Any ideas?
I assume this is because Argo will look for the traefik-values.yml file in the repoURL (so, not in the location where the Application file is), and it obviously doesn't exist there.
You can check more about this issue here. There you can also find a couple of proposed solutions. Some of them are:
a plugin to do a helm template with your values files
a custom CI pipeline to take the content of your values.yaml file and add it to the Application manifest
putting values directly in the Application manifest, skipping the values.yaml file altogether (see the sketch after this list)
having a chart that depends on a chart, like here (I don't like this one as it downloads twice from two different locations, plus this issue)
playing around with kustomize
or waiting for ArgoCD 2.5; it seems it will include a native solution out of the box, according to the mentioned GitHub issue
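For reference, here is a minimal sketch of the third option, reusing the source from the Application above and inlining only a small placeholder subset of the values under helm.values (destination and syncPolicy stay exactly as in the question):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argo-traefik-chart
  namespace: argocd
spec:
  project: default
  source:
    path: traefik
    repoURL: https://github.com/traefik/traefik-helm-chart.git
    targetRevision: HEAD
    helm:
      # Inline values take the place of the external traefik-values.yml file
      values: |
        logs:
          general:
            level: DEBUG
        ingressClass:
          enabled: true
          isDefaultClass: true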
Related
I am a beginner who is using Prometheus and Grafana to monitor values from a REST API.
Prometheus, json-exporter, and Grafana were all installed from Helm charts; Prometheus was installed with the default values.yaml, and json-exporter was installed with a custom values.yaml.
I checked that Prometheus has picked up the json-exporter's ServiceMonitor as a target, but I can't see its metrics.
How can I check the metrics? Below are the environment, screenshots and code.
environment:
kubernetes: v1.22.9
helm: v3.9.2
prometheus-json-exporter helm chart: v0.5.0
kube-prometheus-stack helm chart: 0.58.0
screenshots:
https://drive.google.com/drive/folders/1vfjbidNpE2_yXfxdX8oX5eWh4-wAx7Ql?usp=sharing
values.yaml (custom_jsonexporter_values.yaml):
# Default values for prometheus-json-exporter.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: quay.io/prometheuscommunity/json-exporter
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart appVersion.
tag: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: []
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: []
podSecurityContext: {}
# fsGroup: 2000
# podLabels:
# Custom labels for the pod
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
service:
type: ClusterIP
port: 7979
targetPort: http
name: http
serviceMonitor:
## If true, a ServiceMonitor CRD is created for a prometheus operator
## https://github.com/coreos/prometheus-operator
##
enabled: true
namespace: monitoring
scheme: http
# Default values that will be used for all ServiceMonitors created by `targets`
defaults:
additionalMetricsRelabels: {}
interval: 60s
labels:
release: prometheus
scrapeTimeout: 60s
targets:
- name : pi2
url: http://xxx.xxx.xxx.xxx:xxxx
labels: {} # Map of labels for ServiceMonitor. Overrides value set in `defaults`
interval: 60s # Scraping interval. Overrides value set in `defaults`
scrapeTimeout: 60s # Scrape timeout. Overrides value set in `defaults`
additionalMetricsRelabels: {} # Map of metric labels and values to add
ingress:
enabled: false
className: ""
annotations: []
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths:
- path: /
pathType: ImplementationSpecific
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: []
tolerations: []
affinity: []
configuration:
config: |
---
modules:
default:
metrics:
- name: used_storage_byte
path: '{ .used }'
help: used storage byte
values:
used : '{ .used }'
labels: {}
- name: free_storage_byte
path: '{ .free }'
help: free storage byte
labels: {}
values :
free : '{ .free }'
- name: total_storage_byte
path: '{ .total }'
help: total storage byte
labels: {}
values :
total : '{ .total }'
prometheusRule:
enabled: false
additionalLabels: {}
namespace: ""
rules: []
additionalVolumes: []
# - name: password-file
# secret:
# secretName: secret-name
additionalVolumeMounts: []
# - name: password-file
# mountPath: "/tmp/mysecret.txt"
# subPath: mysecret.txt
Firstly, you can check the Targets page in the Prometheus UI to see a) if your desired target is even defined and b) if the endpoint is reachable and being scraped.
However, you may need to troubleshoot a little if either of the above is not the case:
It is important to understand what is happening. You have deployed a Prometheus Operator to the cluster. If you used the default values from the Helm chart, you also deployed a Prometheus custom resource (CR). This instance is what tells the Prometheus Operator how to ultimately configure the Prometheus running inside the pod. Certain things are static, like global metric relabeling for example, but most are dynamic, such as picking up new targets to actually scrape.

Inside the Prometheus CR you will find options to specify serviceMonitorSelector and serviceMonitorNamespaceSelector (the behaviour is the same for probes and podMonitors, so I'm only going over it once). Assuming you leave the default in place, like serviceMonitorNamespaceSelector: {}, the Prometheus Operator will look for ServiceMonitors in all namespaces on the cluster to which it has access via its serviceAccount. The serviceMonitorSelector field lets you specify a label and value combination that must be present on a ServiceMonitor for it to be picked up. Once one or more ServiceMonitors are found that match the criteria in the selectors, the Prometheus Operator adjusts the configuration in the actual Prometheus instance (tl;dr version) so you end up with proper scrape targets.
Step 1 for troubleshooting: do your selectors match the labels and namespace of the ServiceMonitors? Actually check those. The default on the Prometheus Operator Helm chart expects a label release: prometheus-operator, and in your config you don't seem to add that to your json-exporter's ServiceMonitor.
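Purely as an illustration of Step 1 (the exact label key and value depend on how your kube-prometheus-stack release is configured), the label selected by the Prometheus CR and the label carried by the ServiceMonitor have to line up:
# Prometheus custom resource (fragment)
spec:
  serviceMonitorSelector:
    matchLabels:
      release: prometheus-operator
---
# json-exporter ServiceMonitor (fragment) - must carry the same label
metadata:
  labels:
    release: prometheus-operator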
Step 2: The same behaviour as outlined for how ServiceMonitors are picked up happens in turn inside the ServiceMonitor itself, so make sure that your Service actually matches what is specified in the ServiceMonitor.
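And a sketch of Step 2 (the label key below is an assumption based on common chart conventions; check the actual labels on your json-exporter Service): the ServiceMonitor selects the Service by label and scrapes the named port:
# ServiceMonitor (fragment)
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: prometheus-json-exporter
  endpoints:
    - port: http
---
# Service (fragment) - labels and port name must match the ServiceMonitor
metadata:
  labels:
    app.kubernetes.io/name: prometheus-json-exporter
spec:
  ports:
    - name: http
      port: 7979
      targetPort: http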
To deep dive further into the options you have and what the fields do, check the API documentation.
I am installing kube-prometheus-stack with Helm, and I am adding some custom scraping configuration to Prometheus which requires authentication. I need to pass basic_auth with a username and password in the values.yaml file.
The thing is that I need to commit the values.yaml file to a repo, so I am wondering how I can have the username and password set in the values file, maybe from a secret in Kubernetes or some other way?
prometheus:
prometheusSpec:
additionalScrapeConfigs:
- job_name: myjob
scrape_interval: 20s
metrics_path: /metrics
static_configs:
- targets:
- myservice.default.svc.cluster.local:80
basic_auth:
username: prometheus
password: prom123456
Scrape configs support specifying a password_file parameter, so you can mount your own secret via volumes and volumeMounts:
Disclaimer: I haven't tested it myself and am not using kube-prometheus-stack, but I guess something like this should work:
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: myjob
        scrape_interval: 20s
        metrics_path: /metrics
        static_configs:
          - targets:
              - myservice.default.svc.cluster.local:80
        basic_auth:
          username: prometheus
          # A secret volume mounts as a directory, so point at <mountPath>/<key>;
          # the key name 'password' is an example (see the Secret sketch below).
          password_file: /etc/scrape-passwordfile/password
    # Additional volumes on the output StatefulSet definition.
    # (Kubernetes volume and Secret names may not contain underscores, hence the dashes.)
    volumes:
      - name: scrape-passwordfile
        secret:
          secretName: scrape-passwordfile
          optional: false
    # Additional VolumeMounts on the output StatefulSet definition.
    volumeMounts:
      - name: scrape-passwordfile
        mountPath: "/etc/scrape-passwordfile"
Another option is to ditch additionalScrapeConfigs and use additionalScrapeConfigsSecret to store the whole scrape config inside a secret:
## If additional scrape configurations are already deployed in a single secret file you can use this section.
## Expected values are the secret name and key
## Cannot be used with additionalScrapeConfigs
additionalScrapeConfigsSecret: {}
# enabled: false
# name:
# key:
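A rough, untested sketch of that approach (names are arbitrary): the secret holding the full scrape config is created outside the committed values, and values.yaml only references it by name and key:
apiVersion: v1
kind: Secret
metadata:
  name: additional-scrape-configs
type: Opaque
stringData:
  additional-scrape-configs.yaml: |
    - job_name: myjob
      scrape_interval: 20s
      metrics_path: /metrics
      static_configs:
        - targets:
            - myservice.default.svc.cluster.local:80
      basic_auth:
        username: prometheus
        password: prom123456
and in values.yaml:
prometheus:
  prometheusSpec:
    additionalScrapeConfigsSecret:
      enabled: true
      name: additional-scrape-configs
      key: additional-scrape-configs.yaml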
I am trying, through Azure DevOps, to launch a pipeline that specifies the tag of a specific version of the container image (not latest). How can I do that?
Previously to this requirement, I used:
helm upgrade --values=$(System.DefaultWorkingDirectory)/<FOLDER/NAME>.yaml --namespace <NAMESPACE> --install --reset-values --wait <NAME> .
At the moment, it gives me errors with the flag "--app-version":
2020-06-25T15:43:51.9947356Z Error: unknown flag: --app-version
2020-06-25T15:43:51.9990453Z
2020-06-25T15:43:52.0054964Z ##[error]Bash exited with code '1'.
Maybe another way is to download from the Harbor repository and make a Helm roll to a version with that tag, but I can't find the way; it isn't clear to me.
YML:
# Default values for consent-sandbox.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
nameSpace: <NAME>-pre
image:
repository: <REPO>
pullPolicy: Always
## Uncomment and remove [] to download image private
imagePullSecrets: []
# - name: <namePullSecret>
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: false
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name:
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
containers:
portName: http
port: 8080
protocol: TCP
env:
APP_NAME: <NAME>
JAVA_OPTS_EXT: -Djava.security.egd=file:/dev/./urandom -Dcom.sun.net.ssl.checkRevocation=false -Djavax.net.ssl.trustStore=/etc/truststore/jssecacerts -Djavax.net.ssl.trustStorePassword=changeit
WILY_MOM_PORT: 5001
TZ: Europe/Madrid
spring_cloud_config_uri: https://<CONF>.local
spring_application_name: <NAME>
SPRING_CLOUD_CONFIG_PROFILE: pre
envSecrets: {}
livenessProbe: {}
# path: /
# port: 8080
readinessProbe: {}
# path: /
# port: 8080
service:
type: ClusterIP
portName: http
port: 8080
targetPort: 8080
containerPort: 8080
secret:
jks: <JKS>-jks
jssecacerts: jssecacerts
ingress:
enabled: false
route:
enabled: true
status: ""
# Default values for openshift-route.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
annotations:
# kubernetes.io/acme-tls: "true"
# haproxy.router.openshift.io/timeout: 5000ms
# haproxy.router.openshift.io/ip_whitelist: <IP>
labels:
host: "<HOST>.paas.cloudcenter.corp"
path: ""
wildcardPolicy: None
nameOverride: ""
fullnameOverride: ""
tls:
enabled: true
termination: edge
insecureEdgeTerminationPolicy: "None"
key:
certificate:
caCertificate:
destinationCACertificate:
service:
name: "<NAME"
targetPort: http
weight: 100
alternateBackends: []
resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
limits:
cpu: 150m
memory: 1444Mi
requests:
cpu: 100m
memory: 1024Mi
nodeSelector: {}
tolerations: []
affinity: {}
Probably, I need to add in the YML:
containers:
- name: my_container
image: my_image:latest
imagePullPolicy: "Always"
CHART:
apiVersion: v2
name: examplename
description: testing
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
version: 1.0.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application.
appVersion: latest
but... what can I do if I can't change the YML?
Finally, I used another way, with the OC client:
oc patch deploy push-engine --type='json' -p '[{ "op": "replace", "path": "/spec/template/spec/containers/0/image", "value": "registry.sdi.dev.weu.azure.paas.cloudcenter.corp/test-dev/test:0.0.1" }]'
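For completeness: if the chart exposed the tag as a value (for example an image.tag key, which the YML above does not define, so this part is hypothetical), the version could also be selected at upgrade time instead of patching the Deployment afterwards:
helm upgrade --values=$(System.DefaultWorkingDirectory)/<FOLDER/NAME>.yaml --namespace <NAMESPACE> --install --reset-values --wait --set image.tag=0.0.1 <NAME> .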
I have to install a SonarQube Helm chart with PostgreSQL persistence pointing to an external database. This database server is already being used, and the chart is configured as below (IP and password changed for security reasons). My idea is to create a sonarDB database and install the chart. Would it be safe, or would there be a risk?
# Default values for sonarqube.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
# This will use the default deployment strategy unless it is overriden
deploymentStrategy: {}
image:
repository: sonarqube
tag: 7.9.1-community
# If using a private repository, the name of the imagePullSecret to use
# pullSecret: my-repo-secret
# Set security context for sonarqube pod
securityContext:
fsGroup: 999
# Settings to configure elasticsearch host requirements
elasticsearch:
configureNode: true
bootstrapChecks: true
service:
type: ClusterIP
externalPort: 9000
internalPort: 9000
labels:
annotations: {}
# May be used in example for internal load balancing in GCP:
# cloud.google.com/load-balancer-type: Internal
# loadBalancerSourceRanges:
# - 0.0.0.0/0
# loadBalancerIP: 1.2.3.4
ingress:
enabled: false
# Used to create an Ingress record.
hosts:
- name: sonar.organization.com
# default paths for "/" and "/*" will be added
path: /
# If a different path is defined, that path and {path}/* will be added to the ingress resource
# path: /sonarqube
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
# This property allows for reports up to a certain size to be uploaded to SonarQube
# nginx.ingress.kubernetes.io/proxy-body-size: "8m"
# Additional labels for Ingress manifest file
# labels:
# traffic-type: external
# traffic-type: internal
tls: []
# Secrets must be manually created in the namespace.
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
# Affinity for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}
# Tolerations for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
# hostAliases allows the modification of the hosts file inside a container
hostAliases: []
# - ip: "192.168.1.10"
# hostnames:
# - "example.com"
# - "www.example.com"
readinessProbe:
initialDelaySeconds: 60
periodSeconds: 30
failureThreshold: 6
# If an ingress *path* other than the root (/) is defined, it should be reflected here
# A trailing "/" must be included
sonarWebContext: /
# sonarWebContext: /sonarqube/
livenessProbe:
initialDelaySeconds: 60
periodSeconds: 30
# If an ingress *path* other than the root (/) is defined, it should be reflected here
# A trailing "/" must be included
sonarWebContext: /
# sonarWebContext: /sonarqube/
# Set extra env variables. Like proxy settings.
extraEnv: {}
# If an ingress *path* is defined, it should be reflected here
# sonar.web.context: /sonarqube
# Set annotations for pods
annotations: {}
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
persistence:
enabled: false
## Set annotations on pvc
annotations: {}
## Specify an existing volume claim instead of creating a new one.
## When using this option all following options like storageClass, accessMode and size are ignored.
#existingClaim: gke-homolog-sonarqube
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
storageClass:
accessMode: ReadWriteOnce
size: 10Gi
# List of plugins to install.
# For example:
plugins:
install:
- "https://github.com/sleroy/sonar-slack-notifier-plugin/releases/download/2.5/cks-slack-notifier-2.5.jar"
- "https://repo1.maven.org/maven2/org/sonarsource/java/sonar-java-plugin/5.14.0.18788/sonar-java-plugin-5.14.0.18788.jar"
#plugins:
#install: []
# initContainerImage: alpine:3.10.3
# deleteDefaultPlugins: true
#resources: {}
# We allow the plugins init container to have a separate resources declaration because
# the initContainer does not take as much resources.
# A custom sonar.properties file can be provided via dictionary.
# For example:
# sonarProperties:
# sonar.forceAuthentication: true
# sonar.security.realm: LDAP
# ldap.url: ldaps://organization.com
# Additional sonar properties to load from a secret with a key "secret.properties" (must be a string)
# sonarSecretProperties:
# Kubernetes secret that contains the encryption key for the sonarqube instance.
# The secret must contain the key 'sonar-secret.txt'.
# The 'sonar.secretKeyPath' property will be set automatically.
# sonarSecretKey: "settings-encryption-secret"
customCerts:
## Enable to override the default cacerts with your own one
enabled: false
secretName: my-cacerts
## Configuration value to select database type
## Option to use "postgresql" or "mysql" database type, by default "postgresql" is chosen
## Set the "enable" field to true of the database type you select (if you want to use internal database) and false of the one you don't select
#database:
# type: "postgresql"
## Configuration values for postgresql dependency
## ref: https://github.com/kubernetes/charts/blob/master/stable/postgresql/README.md
postgresql:
# Enable to deploy the PostgreSQL chart
enabled: false
# To use an external PostgreSQL instance, set enabled to false and uncomment
# the line below:
postgresServer: "11.31.76.3"
# To use an external secret for the password for an external PostgreSQL
# instance, set enabled to false and provide the name of the secret on the
# line below:
# postgresPasswordSecret: ""
postgresUser: "application"
postgresPassword: "pass123"
postgresDatabase: "sonarDB"
# Specify the TCP port that PostgreSQL should use
service:
port: 5432
## Configuration values for the mysql dependency
## ref: https://github.com/kubernetes/charts/blob/master/stable/mysql/README.md
##
mysql:
# Enable to deploy the mySQL chart
enabled: false
# To use an external mySQL instance, set enabled to false and uncomment
# the line below:
# mysqlServer: ""
# To use an external secret for the password for an external mySQL instance,
# set enabled to false and provide the name of the secret on the line below:
# mysqlPasswordSecret: ""
mysqlUser: "sonarUser"
mysqlPassword: "sonarPass"
mysqlDatabase: "sonarDB"
# mysqlParams:
# useSSL: "true"
# Specify the TCP port that mySQL should use
service:
port: 3306
#
# Additional labels to add to the pods:
# podLabels:
# key: value
podLabels: {}
# For compatibility with 8.0 replace by "/opt/sq"
sonarqubeFolder: /opt/sonarqube
If you match the current version of SonarQube that your existing database is using, then I doubt that you'd have a problem. The Helm chart out of the box brings in a Community Edition, so get the correct image tag to use from Docker Hub.
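To make that concrete, these are the relevant pieces from the values above, condensed rather than changed: the pinned image tag (to be matched against the SonarQube version your existing database schema was created with) and the external-database block with the bundled PostgreSQL disabled:
image:
  repository: sonarqube
  tag: 7.9.1-community    # pin to the version the existing database already matches
postgresql:
  enabled: false          # do not deploy the bundled PostgreSQL chart
  postgresServer: "11.31.76.3"
  postgresUser: "application"
  postgresDatabase: "sonarDB"
  # postgresPasswordSecret: ""   # consider referencing a secret instead of postgresPassword in plain text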
I read in the envconsul documentation this:
For additional security, tokens may also be read from the environment
using the CONSUL_TOKEN or VAULT_TOKEN environment variables
respectively. It is highly recommended that you do not put your tokens
in plain-text in a configuration file.
So, I have this envconsul.hcl file:
# the settings to connect to vault server
# "http://10.0.2.2:8200" is the Vault's address on the host machine when using Minikube
vault {
address = "${env(VAULT_ADDR)}"
renew_token = false
retry {
backoff = "1s"
}
token = "${env(VAULT_TOKEN)}"
}
# the settings to find the endpoint of the secrets engine
secret {
no_prefix = true
path = "secret/app/config"
}
However, I get this error:
[WARN] (view) vault.read(secret/app/config): vault.read(secret/app/config): Get $%7Benv%28VAULT_ADDR%29%7D/v1/secret/app/config: unsupported protocol scheme "" (retry attempt 1 after "1s")
As I understand it, it cannot do the variable substitution.
I tried hard-coding "http://10.0.2.2:8200" and it works.
The same happens with the VAULT_TOKEN var.
If I hardcode the VAULT_ADDR, then I get this error:
[WARN] (view) vault.read(secret/app/config): vault.read(secret/app/config): Error making API request.
URL: GET http://10.0.2.2:8200/v1/secret/app/config
Code: 403. Errors:
* permission denied (retry attempt 2 after "2s")
Is there a way for this file to understand the environment variables?
EDIT 1
This is my pod.yml file
---
apiVersion: v1
kind: Pod
metadata:
name: sample
spec:
serviceAccountName: vault-auth
restartPolicy: Never
# Add the ConfigMap as a volume to the Pod
volumes:
- name: vault-token
emptyDir:
medium: Memory
# Populate the volume with config map data
- name: config
configMap:
# `name` here must match the name
# specified in the ConfigMap's YAML
# -> kubectl create configmap vault-cm --from-file=./vault-configs/
name: vault-cm
items:
- key : vault-agent-config.hcl
path: vault-agent-config.hcl
- key : envconsul.hcl
path: envconsul.hcl
initContainers:
# Vault container
- name: vault-agent-auth
image: vault
volumeMounts:
- name: vault-token
mountPath: /home/vault
- name: config
mountPath: /etc/vault
# This assumes Vault running on local host and K8s running in Minikube using VirtualBox
env:
- name: VAULT_ADDR
value: http://10.0.2.2:8200
# Run the Vault agent
args:
[
"agent",
"-config=/etc/vault/vault-agent-config.hcl",
"-log-level=debug",
]
containers:
- name: python
image: myappimg
imagePullPolicy: Never
ports:
- containerPort: 5000
volumeMounts:
- name: vault-token
mountPath: /home/vault
- name: config
mountPath: /etc/envconsul
env:
- name: HOME
value: /home/vault
- name: VAULT_ADDR
value: http://10.0.2.2:8200
I. Within the container specification, set environment variables (values in double quotes):
env:
- name: VAULT_TOKEN
value: "abcd1234"
- name: VAULT_ADDR
value: "http://10.0.2.2:8200"
Then refer to the values in envconsul.hcl
vault {
address = ${VAULT_ADDR}
renew_token = false
retry {
backoff = "1s"
}
token = ${VAULT_TOKEN}
}
II. Another option is to unseal the vault cluster (with the unseal key which was printed while initializing the vault cluster)
$ vault operator unseal
and then authenticate to the vault cluster using a root token.
$ vault login <your-generated-root-token>
More details
I tried many suggestions and nothing worked until I passed the -vault-token argument to the envconsul command, like this:
envconsul -vault-token=$VAULT_TOKEN -config=/app/config.hcl -secret="/secret/debug/service" env
and in config.hcl it should be like this:
vault {
address = "http://kvstorage.try.direct:8200"
token = "${env(VAULT_TOKEN)}"
}
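In a pod layout like the one in the question, where the Vault agent init container and the app share the /home/vault volume, the token can also be read from the agent's file sink before starting envconsul. The sink path below is an assumption (it depends on vault-agent-config.hcl, which isn't shown), and the config path matches the /etc/envconsul mount from the pod spec:
# assumption: the agent's file sink writes the client token to /home/vault/.vault-token
export VAULT_TOKEN=$(cat /home/vault/.vault-token)
envconsul -vault-token="$VAULT_TOKEN" -config=/etc/envconsul/envconsul.hcl env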