I'm trying to deploy the Telegraf Helm chart on Kubernetes:
helm upgrade --install telegraf-instance -f values.yaml influxdata/telegraf
When I add the modbus input plugin with holding_registers, I get this error:
[telegraf] Error running agent: Error loading config file /etc/telegraf/telegraf.conf: Error parsing data: line 49: key `name' is in conflict with line 2fd
My values.yaml is below:
## Default values.yaml for Telegraf
## This is a YAML-formatted file.
## ref: https://hub.docker.com/r/library/telegraf/tags/
replicaCount: 1
image:
  repo: "telegraf"
  tag: "1.21.4"
  pullPolicy: IfNotPresent
podAnnotations: {}
podLabels: {}
imagePullSecrets: []
args: []
env:
  - name: HOSTNAME
    value: "telegraf-polling-service"
resources: {}
nodeSelector: {}
affinity: {}
tolerations: []
service:
  enabled: true
  type: ClusterIP
  annotations: {}
rbac:
  create: true
  clusterWide: false
  rules: []
serviceAccount:
  create: false
  name:
  annotations: {}
config:
  agent:
    interval: 60s
    round_interval: true
    metric_batch_size: 1000000
    metric_buffer_limit: 100000000
    collection_jitter: 0s
    flush_interval: 60s
    flush_jitter: 0s
    precision: ''
    hostname: '9825128'
    omit_hostname: false
  processors:
    - enum:
        mapping:
          field: "status"
          dest: "status_code"
          value_mappings:
            healthy: 1
            problem: 2
            critical: 3
  inputs:
    - modbus:
        name: "PS MAIN ENGINE"
        controller: 'tcp://192.168.0.101:502'
        slave_id: 1
        holding_registers:
          - name: "Coolant Level"
            byte_order: CDAB
            data_type: FLOAT32
            scale: 0.001
            address: [51410, 51411]
    - modbus:
        name: "SB MAIN ENGINE"
        controller: 'tcp://192.168.0.102:502'
        slave_id: 1
        holding_registers:
          - name: "Coolant Level"
            byte_order: CDAB
            data_type: FLOAT32
            scale: 0.001
            address: [51410, 51411]
  outputs:
    - influxdb_v2:
        token: token
        organization: organisation
        bucket: bucket
        urls:
          - "url"
metrics:
  health:
    enabled: true
    service_address: "http://:8888"
    threshold: 5000.0
  internal:
    enabled: true
    collect_memstats: false
pdb:
  create: true
  minAvailable: 1
I resolved the problem with the following steps:
deleted the config section of my values.yaml
added my telegraf.conf to the /additional_config path
created a ConfigMap in Kubernetes with the following command:
kubectl create configmap external-config --from-file=/additional_config
added the following to values.yaml:
volumes:
  - name: my-config
    configMap:
      name: external-config
volumeMounts:
  - name: my-config
    mountPath: /additional_config
args:
  - "--config=/etc/telegraf/telegraf.conf"
  - "--config-directory=/additional_config"
Related
I've deployed Promtail, Grafana, Loki and AlertManager using Helm charts on a k3d cluster on my local machine. I would like to have some rules in Loki so that if something happens, AlertManager is informed. For now I tried only a simple rule, just to check that it works.
My Loki version: {"version":"2.6.1","revision":"6bd05c9a4","branch":"HEAD","buildUser":"root@ea1e89b8da02","buildDate":"2022-07-18T08:49:07Z","goVersion":""}
My Grafana version:
And the Loki configuration:
loki:
# should loki be deployed on cluster?
enabled: true
image:
repository: grafana/loki
pullPolicy: Always
pullSecrets:
- registry
priorityClassName: normal
resources:
limits:
memory: 3Gi
cpu: 0
requests:
memory: 0
cpu: 0
config:
chunk_store_config:
max_look_back_period: 30d
table_manager:
retention_deletes_enabled: true
retention_period: 30d
query_range:
split_queries_by_interval: 0
parallelise_shardable_queries: false
querier:
max_concurrent: 2048
frontend:
max_outstanding_per_tenant: 4096
compress_responses: true
ingester:
wal:
enabled: true
dir: /tmp/wal
schema_config:
configs:
- from: 2022-12-05
store: boltdb-shipper
object_store: filesystem
schema: v11
index:
prefix: index_
period: 24h
storage_config:
boltdb_shipper:
active_index_directory: /tmp/loki/boltdb-shipper-active
cache_location: /tmp/loki/boltdb-shipper-cache
cache_ttl: 24h # Can be increased for faster performance over longer query periods, uses more disk space
shared_store: filesystem
filesystem:
directory: /tmp/loki/chunks
compactor:
working_directory: /tmp/loki/boltdb-shipper-compactor
shared_store: filesystem
ruler:
storage:
type: local
local:
directory: /tmp/loki/rules/
ring:
kvstore:
store: inmemory
rule_path: /tmp/loki/rules-temp
alertmanager_url: http://onprem-kube-prometheus-alertmanager.svc.mylocal-monitoring:9093
enable_api: true
enable_alertmanager_v2: true
write:
extraVolumeMounts:
- name: rules-config
mountPath: /tmp/loki/rules/fake/
extraVolumes:
- name: rules-config
configMap:
name: rules-cfgmap
items:
- key: "rules.yaml"
path: "rules.yaml"
read:
extraVolumeMounts:
- name: rules-config
mountPath: /tmp/loki/rules/fake/
extraVolumes:
- name: rules-config
configMap:
name: rules-cfgmap
items:
- key: "rules.yaml"
path: "rules.yaml"
promtail:
image:
registry: docker
pullPolicy: Always
imagePullSecrets:
- name: registry
priorityClassName: normal
resources:
limits:
memory: 256Mi
cpu: 0
requests:
memory: 0
cpu: 0
livenessProbe:
failureThreshold: 5
httpGet:
path: /ready
port: http-metrics
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
config:
snippets:
pipelineStages:
- cri: {}
common:
- action: replace
source_labels:
- __meta_kubernetes_pod_node_name
target_label: node_name
- action: replace
source_labels:
- __meta_kubernetes_namespace
target_label: namespace
- action: replace
source_labels:
- __meta_kubernetes_pod_container_name
target_label: container
- action: replace
replacement: /var/log/pods/*$1/*.log
separator: /
source_labels:
- __meta_kubernetes_pod_uid
- __meta_kubernetes_pod_container_name
target_label: __path__
- action: replace
replacement: /var/log/pods/*$1/*.log
regex: true/(.*)
separator: /
source_labels:
- __meta_kubernetes_pod_annotationpresent_kubernetes_io_config_hash
- __meta_kubernetes_pod_annotation_kubernetes_io_config_hash
- __meta_kubernetes_pod_container_name
target_label: __path__
monitoring:
enabled: false
networkPolicies:
enabled: false
The problem is that when I check the rules with curl -X GET localhost:3100/loki/api/v1/rules, it shows: unable to read rule dir /tmp/loki/rules/fake: open /tmp/loki/rules/fake: no such file or directory.
So it seems it can't find the rules file.
I also tried changing the config like this:
write:
  extraVolumeMounts:
    - name: rules-conf
      mountPath: /tmp/loki/rules/fake/rules.yaml
  extraVolumes:
    - name: rules-conf
read:
  extraVolumeMounts:
    - name: rules-conf
      mountPath: /tmp/loki/rules/fake/rules.yaml
  extraVolumes:
    - name: rules-conf
And my configmap:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: rules-cfgmap
  namespace: mylocal-monitoring
data:
  rules.yaml: |
    groups:
      - name: PrometheusAlertsGroup
        rules:
          - alert: test1
            expr: |
              1 > 0
            for: 0m
            labels:
              severity: critical
            annotations:
              summary: TEST: testing test
              description: test
And the rules file:
groups:
  - name: PrometheusAlertsGroup
    rules:
      - alert: test1
        expr: |
          1 > 0
        for: 0m
        labels:
          severity: critical
        annotations:
          summary: TEST: testing test
          description: test
But the problem is the same. Any ideas?
It finally worked when I created /tmp/loki/rules/fake/rules.yaml manually, but the point is not to have to create it manually.
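For anyone debugging the same thing, a quick way to check whether the ConfigMap content actually reaches the write/read containers before the ruler looks for it (a sketch; substitute the real pod names and namespace):
kubectl -n mylocal-monitoring get configmap rules-cfgmap -o yaml
kubectl -n mylocal-monitoring exec <write-pod> -- ls -l /tmp/loki/rules/fake/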
I am working on a use case where I need to add a new container to the JupyterHub pod. This new container (a sidecar) monitors JupyterHub directories.
The JupyterHub container creates a dynamic PV while coming up; see the section below:
singleuser:
podNameTemplate:
extraTolerations: []
nodeSelector: {}
extraNodeAffinity:
required: []
preferred: []
extraPodAffinity:
required: []
preferred: []
extraPodAntiAffinity:
required: []
preferred: []
networkTools:
image:
name: jupyterhub/k8s-network-tools
tag: "set-by-chartpress"
pullPolicy:
pullSecrets: []
resources: {}
cloudMetadata:
# block set to true will append a privileged initContainer using the
# iptables to block the sensitive metadata server at the provided ip.
blockWithIptables: true
ip: 169.254.169.254
networkPolicy:
enabled: true
ingress: []
egress:
# Required egress to communicate with the hub and DNS servers will be
# augmented to these egress rules.
#
# This default rule explicitly allows all outbound traffic from singleuser
# pods, except to a typical IP used to return metadata that can be used by
# someone with malicious intent.
- to:
- ipBlock:
cidr: 0.0.0.0/0
except:
- 169.254.169.254/32
interNamespaceAccessLabels: ignore
allowedIngressPorts: []
events: true
extraAnnotations: {}
extraLabels:
hub.jupyter.org/network-access-hub: "true"
extraFiles: {}
extraEnv: {}
lifecycleHooks: {}
initContainers: []
extraContainers: []
uid: 1000
fsGid: 100
serviceAccountName:
storage:
type: dynamic
extraLabels: {}
extraVolumes: []
extraVolumeMounts: []
static:
pvcName:
subPath: "{username}"
capacity: 10Gi
homeMountPath: /home/jovyan
dynamic:
storageClass:
pvcNameTemplate: claim-{username}{servername}
volumeNameTemplate: volume-{username}{servername}
storageAccessModes: [ReadWriteOnce]
image:
name: jupyterhub/k8s-singleuser-sample
tag: "set-by-chartpress"
pullPolicy:
pullSecrets: []
startTimeout: 300
cpu:
limit:
guarantee:
memory:
limit:
guarantee: 1G
extraResource:
limits: {}
guarantees: {}
cmd:
defaultUrl:
extraPodConfig: {}
profileList: []
I have added my new container in the extraContainers section of the deployment file. My container does start, but the dynamic PV is not mounted on that container.
Is the use case I am trying to achieve technically possible at the Kubernetes level?
The full YAML file is here for reference:
https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/main/jupyterhub/values.yaml
Config map for reference:
singleuser:
baseUrl: /
cloudMetadata:
enabled: true
ip: xx.xx.xx.xx
cpu: {}
events: true
extraAnnotations: {}
extraConfigFiles:
config_files:
- cm_key: ''
content: ''
file_path: ''
- cm_key: ''
content: ''
file_path: ''
enabled: false
extraContainers:
- image: 'docker:19.03-rc-dind'
lifecycle:
postStart:
exec:
command:
- sh
- '-c'
- update-ca-certificates; echo Certificates Updated
name: dind
securityContext:
privileged: true
volumeMounts:
- mountPath: /var/lib/docker
name: dind-storage
- mountPath: /usr/local/share/ca-certificates/
name: docker-cert
extraEnv:
ACTUAL_HADOOP_CONF_DIR: ''
ACTUAL_HIVE_CONF_DIR: ''
ACTUAL_SPARK_CONF_DIR: ''
CDH_PARCEL_DIR: ''
DOCKER_HOST: ''
JAVA_HOME:
LIVY_URL: ''
SPARK2_PARCEL_DIR: ''
extraLabels:
hub.jupyter.org/network-access-hub: 'true'
extraNodeAffinity:
preferred: []
required: []
extraPodAffinity:
preferred: []
required: []
extraPodAntiAffinity:
preferred: []
required: []
extraPodConfig: {}
extraResource:
guarantees: {}
limits: {}
extraTolerations: []
fsGid: 0
image:
name: >-
/jupyterhub/jpt-spark-magic
pullPolicy: IfNotPresent
tag: xxx
imagePullSecret:
email: null
enabled: false
registry: null
username: null
initContainers: []
lifecycleHooks: {}
memory:
guarantee: 1G
networkPolicy:
egress:
- to:
- ipBlock:
cidr: 0.0.0.0/0
except:
- 169.254.169.254/32
enabled: false
ingress: []
networkTools:
image:
name: >-
/k8s-hub-multispawn
pullPolicy: IfNotPresent
tag: '12345'
nodeSelector: {}
profileList:
- description: Python for data enthusiasts
display_name: 0
kubespawner_override:
cmd:
- jpt-entry-cmd.sh
cpu_limit: 1
environment:
XYZ_SERVICE_URL: 'http://XYZ:8080'
CURL_CA_BUNDLE: /etc/ssl/certs/ca-certificates.crt
DOCKER_HOST: 'tcp://localhost:2375'
HADOOP_CONF_DIR: /etc/hadoop/conf
HADOOP_HOME: /usr/hdp/3.1.5.6091-7/hadoop/
HDP_DIR: /usr/hdp/3.1.5.6091-7
HDP_HOME_DIR: /usr/hdp/3.1.5.6091-7
HDP_VERSION: 3.1.5.6091-7
HIVE_CONF_DIR: /usr/hdp/3.1.5.6091-7/hive
HIVE_HOME: /usr/hdp/3.1.5.6091-7/hive
INTEGRATION_ENV: HDP3
JAVA_HOME: /usr/jdk64/jdk1.8.0_112
LD_LIBRARY_PATH: >-
/usr/hdp/3.1.5.6091-7/hadoop/lib/native:/usr/jdk64/jdk1.8.0_112/jre:/usr/hdp/3.1.5.6091-7/usr/lib/:/usr/hdp/3.1.5.6091-7/usr/lib/
LIVY_URL: 'http://ammaster01.fake.org:8999'
MLFLOW_TRACKING_URI: 'http://mlflow:5100'
NO_PROXY: mlflow
SPARK_CONF_DIR: /etc/spark2/conf
SPARK_HOME: /usr/hdp/3.1.5.6091-7/spark2
SPARK2_PARCEL_DIR: /usr/hdp/3.1.5.6091-7/spark2
TOOLS_BASE_PATH: /usr/local/bin
image: >-
/jupyterhub/jpt-spark-magic:1.1.2
mem_limit: 4096M
uid: 0
- description: R for data enthusiasts
display_name: 1
kubespawner_override:
cmd:
- start-all.sh
environment:
XYZ_SERVICE_URL: 'http://XYZ-service:8080'
DISABLE_AUTH: 'true'
XYZ: /home/rstudio/kitematic
image: '/jupyterhub/rstudio:364094'
uid: 0
- description: Python for data enthusiasts test2
display_name: 2
kubespawner_override:
cmd:
- jpt-entry-cmd.sh
cpu_limit: 4
environment:
XYZ_SERVICE_URL: 'http://XYZ-service:8080'
CURL_CA_BUNDLE: /etc/ssl/certs/ca-certificates.crt
DOCKER_HOST: 'tcp://localhost:2375'
HADOOP_CONF_DIR: /etc/hadoop/conf
HADOOP_HOME: /usr/hdp/3.1.5.6091-7/hadoop/
HDP_DIR: /usr/hdp/3.1.5.6091-7
HDP_HOME_DIR: /usr/hdp/3.1.5.6091-7
HDP_VERSION: 3.1.5.6091-7
HIVE_CONF_DIR: /usr/hdp/3.1.5.6091-7/hive
HIVE_HOME: /usr/hdp/3.1.5.6091-7/hive
INTEGRATION_ENV: HDP3
JAVA_HOME: /usr/jdk64/jdk1.8.0_112
LD_LIBRARY_PATH: >-
/usr/hdp/3.1.5.6091-7/hadoop/lib/native:/usr/jdk64/jdk1.8.0_112/jre:/usr/hdp/3.1.5.6091-7/usr/lib/:/usr/hdp/3.1.5.6091-7/usr/lib/
LIVY_URL: 'http://xyz:8999'
MLFLOW_TRACKING_URI: 'http://mlflow:5100'
NO_PROXY: mlflow
SPARK_CONF_DIR: /etc/spark2/conf
SPARK_HOME: /usr/hdp/3.1.5.6091-7/spark2
SPARK2_PARCEL_DIR: /usr/hdp/3.1.5.6091-7/spark2
TOOLS_BASE_PATH: /usr/local/bin
image: >-
/jupyterhub/jpt-spark-magic:1.1.2
mem_limit: 8192M
uid: 0
- description: Python for data enthusiasts test3
display_name: 3
kubespawner_override:
cmd:
- jpt-entry-cmd.sh
cpu_limit: 8
environment:
XYZ_SERVICE_URL: 'http://XYZ-service:8080'
CURL_CA_BUNDLE: /etc/ssl/certs/ca-certificates.crt
DOCKER_HOST: 'tcp://localhost:2375'
HADOOP_CONF_DIR: /etc/hadoop/conf
HADOOP_HOME: /usr/hdp/3.1.5.6091-7/hadoop/
HDP_DIR: /usr/hdp/3.1.5.6091-7
HDP_HOME_DIR: /usr/hdp/3.1.5.6091-7
HDP_VERSION: 3.1.5.6091-7
HIVE_CONF_DIR: /usr/hdp/3.1.5.6091-7/hive
HIVE_HOME: /usr/hdp/3.1.5.6091-7/hive
INTEGRATION_ENV: HDP3
JAVA_HOME: /usr/jdk64/jdk1.8.0_112
LD_LIBRARY_PATH: >-
/usr/hdp/3.1.5.6091-7/hadoop/lib/native:/usr/jdk64/jdk1.8.0_112/jre:/usr/hdp/3.1.5.6091-7/usr/lib/:/usr/hdp/3.1.5.6091-7/usr/lib/
LIVY_URL: 'http://fake.org:8999'
MLFLOW_TRACKING_URI: 'http://mlflow:5100'
NO_PROXY: mlflow
SPARK_CONF_DIR: /etc/spark2/conf
SPARK_HOME: /usr/hdp/3.1.5.6091-7/spark2
SPARK2_PARCEL_DIR: /usr/hdp/3.1.5.6091-7/spark2
TOOLS_BASE_PATH: /usr/local/bin
image: >-
/jupyterhub/jpt-spark-magic:1.1.2
mem_limit: 16384M
uid: 0
startTimeout: 300
storage:
capacity: 10Gi
dynamic:
pvcNameTemplate: 'claim-{username}{servername}'
storageAccessModes:
- ReadWriteOnce
storageClass: nfs-client
volumeNameTemplate: 'volume-{username}{servername}'
extraLabels: {}
extraVolumeMounts:
- mountPath: /etc/krb5.conf
name: krb
readOnly: true
- mountPath: /usr/jdk64/jdk1.8.0_112
name: java-home
readOnly: true
- mountPath: /xyz/conda/envs
name: xyz-conda-envs
readOnly: false
- mountPath: /usr/hdp/
name: bigdata
readOnly: true
subPath: usr-hdp
- mountPath: /etc/hadoop/
name: bigdata
readOnly: true
subPath: HDP
- mountPath: /etc/hive/
name: bigdata
readOnly: true
subPath: hdp-hive
- mountPath: /etc/spark2/
name: bigdata
readOnly: true
subPath: hdp-spark2
extraVolumes:
- emptyDir: {}
name: dind-storage
- name: docker-cert
secret:
secretName: docker-cert
- hostPath:
path: /var/lib/ut_xyz_ts/jdk1.8.0_112
type: Directory
name: java-home
- hostPath:
path: /xyz/conda/envs
type: Directory
name: xyz-conda-envs
- hostPath:
path: /etc/krb5.conf
type: File
name: krb
- name: bigdata
persistentVolumeClaim:
claimName: bigdata
homeMountPath: '/home/{username}'
static:
subPath: '{username}'
type: dynamic
uid: 0
Thanks in advance.
For your question, whether two or more containers of the same pod can technically share the same volume, the answer is yes. Refer here: https://youtu.be/GQJP9QdHHs8?t=82 .
But you should also have a volumeMount (see the example in the video as well) defined in your extra container's spec. If you can check that, or share the output of kubectl describe deployment <your-deployment>, I can confirm it.
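As a sketch of what I mean (the sidecar name, image and command are placeholders, and whether KubeSpawner expands the {username}{servername} template inside extraContainers should be verified against the spawned pod with kubectl describe pod):
singleuser:
  extraContainers:
    - name: dir-monitor                 # hypothetical sidecar
      image: busybox                    # hypothetical image
      command: ["sh", "-c", "while true; do ls -l /home/jovyan; sleep 60; done"]
      volumeMounts:
        - name: volume-{username}{servername}   # must match the volume name KubeSpawner puts in the pod (volumeNameTemplate above)
          mountPath: /home/jovyan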
I created a pod following a Red Hat blog post, then created a subsequent pod using the generated YAML file.
Post: https://www.redhat.com/sysadmin/compose-podman-pods
When creating the pod using the commands, the pod works fine (I can access localhost:8080).
When creating the pod using the YAML file, I get a 403 Forbidden error.
I have tried this on two different hosts (both creating the pod from scratch and using the YAML), deleting all images and the pod each time to make sure nothing was influencing the process.
I'm using podman 2.0.4 on Ubuntu 20.04.
Commands:
podman pod create --name wptestpod -p 8080:80
podman run \
-d --restart=always --pod=wptestpod \
-e MYSQL_ROOT_PASSWORD="myrootpass" \
-e MYSQL_DATABASE="wp" \
-e MYSQL_USER="wordpress" \
-e MYSQL_PASSWORD="w0rdpr3ss" \
--name=wptest-db mariadb
podman run \
-d --restart=always --pod=wptestpod \
-e WORDPRESS_DB_NAME="wp" \
-e WORDPRESS_DB_USER="wordpress" \
-e WORDPRESS_DB_PASSWORD="w0rdpr3ss" \
-e WORDPRESS_DB_HOST="127.0.0.1" \
--name wptest-web wordpress
Original YAML file from podman generate kube wptestpod > wptestpod.yaml:
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-2.0.4
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: '2020-08-26T17:02:56Z'
labels:
app: wptestpod
name: wptestpod
spec:
containers:
- command:
- apache2-foreground
env:
- name: PATH
value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
- name: TERM
value: xterm
- name: container
value: podman
- name: WORDPRESS_DB_NAME
value: wp
- name: WORDPRESS_DB_USER
value: wordpress
- name: APACHE_CONFDIR
value: /etc/apache2
- name: PHP_LDFLAGS
value: -Wl,-O1 -pie
- name: PHP_VERSION
value: 7.4.9
- name: PHP_EXTRA_CONFIGURE_ARGS
value: --with-apxs2 --disable-cgi
- name: GPG_KEYS
value: 42670A7FE4D0441C8E4632349E4FDC074A4EF02D 5A52880781F755608BF815FC910DEB46F53EA312
- name: WORDPRESS_DB_PASSWORD
value: t3stp4ssw0rd
- name: APACHE_ENVVARS
value: /etc/apache2/envvars
- name: PHP_ASC_URL
value: https://www.php.net/distributions/php-7.4.9.tar.xz.asc
- name: PHP_SHA256
value: 23733f4a608ad1bebdcecf0138ebc5fd57cf20d6e0915f98a9444c3f747dc57b
- name: PHP_URL
value: https://www.php.net/distributions/php-7.4.9.tar.xz
- name: WORDPRESS_DB_HOST
value: 127.0.0.1
- name: PHP_CPPFLAGS
value: -fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
- name: PHP_MD5
- name: PHP_EXTRA_BUILD_DEPS
value: apache2-dev
- name: PHP_CFLAGS
value: -fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
- name: WORDPRESS_SHA1
value: 03fe1a139b3cd987cc588ba95fab2460cba2a89e
- name: PHPIZE_DEPS
value: "autoconf \t\tdpkg-dev \t\tfile \t\tg++ \t\tgcc \t\tlibc-dev \t\tmake \t\tpkg-config \t\tre2c"
- name: WORDPRESS_VERSION
value: '5.5'
- name: PHP_INI_DIR
value: /usr/local/etc/php
- name: HOSTNAME
value: wptestpod
image: docker.io/library/wordpress:latest
name: wptest-web
ports:
- containerPort: 80
hostPort: 8080
protocol: TCP
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /var/www/html
- command:
- mysqld
env:
- name: PATH
value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
- name: TERM
value: xterm
- name: container
value: podman
- name: MYSQL_PASSWORD
value: t3stp4ssw0rd
- name: GOSU_VERSION
value: '1.12'
- name: GPG_KEYS
value: 177F4010FE56CA3336300305F1656F24C74CD1D8
- name: MARIADB_MAJOR
value: '10.5'
- name: MYSQL_ROOT_PASSWORD
value: t3stp4ssw0rd
- name: MARIADB_VERSION
value: 1:10.5.5+maria~focal
- name: MYSQL_DATABASE
value: wp
- name: MYSQL_USER
value: wordpress
- name: HOSTNAME
value: wptestpod
image: docker.io/library/mariadb:latest
name: wptest-db
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /
status: {}
---
metadata:
creationTimestamp: null
spec: {}
status:
loadBalancer: {}
YAML file with certain envs removed (taken from blog post):
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-1.9.3
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2020-07-01T20:17:42Z"
labels:
app: wptestpod
name: wptestpod
spec:
containers:
- name: wptest-web
env:
- name: WORDPRESS_DB_NAME
value: wp
- name: WORDPRESS_DB_HOST
value: 127.0.0.1
- name: WORDPRESS_DB_USER
value: wordpress
- name: WORDPRESS_DB_PASSWORD
value: w0rdpr3ss
image: docker.io/library/wordpress:latest
ports:
- containerPort: 80
hostPort: 8080
protocol: TCP
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /var/www/html
- name: wptest-db
env:
- name: MYSQL_ROOT_PASSWORD
value: myrootpass
- name: MYSQL_USER
value: wordpress
- name: MYSQL_PASSWORD
value: w0rdpr3ss
- name: MYSQL_DATABASE
value: wp
image: docker.io/library/mariadb:latest
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /
status: {}
Can anyone see why this pod would not work when created using the YAML file, but works fine when created using the commands? It seems like a good workflow, but it's useless if the pods produced with the YAML are non-functional.
I found the same article, and hit the same problem as you. None of the following tests worked for me:
Add and remove environment variables
Add and remove the restartPolicy part
Play with the capabilities part
As soon as you add the command part back, everything fires up again.
Check it with the following wordpress.yaml:
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-2.2.1
apiVersion: v1
kind: Pod
metadata:
labels:
app: wordpress-pod
name: wordpress-pod
spec:
containers:
- command:
- apache2-foreground
name: wptest-web
env:
- name: WORDPRESS_DB_NAME
value: wp
- name: WORDPRESS_DB_HOST
value: 127.0.0.1
- name: WORDPRESS_DB_USER
value: wordpress
- name: WORDPRESS_DB_PASSWORD
value: w0rdpr3ss
image: docker.io/library/wordpress:latest
ports:
- containerPort: 80
hostPort: 8080
protocol: TCP
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /var/www/html
- command:
- mysqld
name: wptest-db
env:
- name: MYSQL_ROOT_PASSWORD
value: myrootpass
- name: MYSQL_USER
value: wordpress
- name: MYSQL_PASSWORD
value: w0rdpr3ss
- name: MYSQL_DATABASE
value: wp
image: docker.io/library/mariadb:latest
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /
status: {}
Play & checks:
# Create containers, pod and run everything
$ podman play kube wordpress.yaml
# Output
Pod:
5a211c35419b4fcf0deda718e47eec2dd10653a5c5bacc275c312ae75326e746
Containers:
bfd087b5649f8d1b3c62ef86f28f4bcce880653881bcda21823c09e0cca1c85b
5aceb11500db0a91b4db2cc4145879764e16ed0e8f95a2f85d9a55672f65c34b
# Check running state
$ podman container ls; podman pod ls
# Output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5aceb11500db docker.io/library/mariadb:latest mysqld 13 seconds ago Up 10 seconds ago 0.0.0.0:8080->80/tcp wordpress-pod-wptest-db
bfd087b5649f docker.io/library/wordpress:latest apache2-foregroun... 16 seconds ago Up 10 seconds ago 0.0.0.0:8080->80/tcp wordpress-pod-wptest-web
d8bf33eede43 k8s.gcr.io/pause:3.2 19 seconds ago Up 11 seconds ago 0.0.0.0:8080->80/tcp 5a211c35419b-infra
POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS
5a211c35419b wordpress-pod Running 20 seconds ago d8bf33eede43 3
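A quick check that WordPress actually answers on the published port (assuming the containers have finished starting):
$ curl -I http://localhost:8080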
A bit more explanation about the bug:
The problem is that entrypoint and cmd are not parsed correctly from the images, as they should be and as you would expect. It was working in previous versions, and it has already been identified and fixed for future ones.
For complete reference:
Comment found at podman#8710-comment.748672710 breaks this problem into two pieces:
"make podman play use ENVs from image" (podman#8654 already fixed in mainstream)
"podman play should honour both ENTRYPOINT and CMD from image" (podman#8666)
This one is replaced by "play kube: fix args/command handling" (podman#8807 the one already merged to mainstream)
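If you want to see which ENTRYPOINT and CMD podman play should have picked up, you can read them straight from the image config (a sketch):
$ podman image inspect docker.io/library/wordpress:latest --format '{{.Config.Entrypoint}} {{.Config.Cmd}}'
$ podman image inspect docker.io/library/mariadb:latest --format '{{.Config.Entrypoint}} {{.Config.Cmd}}'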
I am running Airflow on Kubernetes from the stable Helm chart, in an AWS environment. This error exists with and without mounting any external volumes for log storage. I tried to set the configuration of the [logs] section to point to an EFS volume that I created. The PV gets mounted through a PVC, but my containers (scheduler and web) are crashing due to the following error:
*** executing Airflow initdb...
Unable to load the config, contains a configuration error.
Traceback (most recent call last):
File "/usr/local/lib/python3.6/logging/config.py", line 565, in configure
handler = self.configure_handler(handlers[name])
File "/usr/local/lib/python3.6/logging/config.py", line 738, in configure_handler
result = factory(**kwargs)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/utils/log/file_processor_handler.py", line 50, in __init__
os.makedirs(self._get_log_directory())
File "/usr/local/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/opt/airflow/logs/scheduler/2020-08-20'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 25, in <module>
from airflow.configuration import conf
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/__init__.py", line 47, in <module>
settings.initialize()
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/settings.py", line 374, in initialize
LOGGING_CLASS_PATH = configure_logging()
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/logging_config.py", line 68, in configure_logging
raise e
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/logging_config.py", line 63, in configure_logging
dictConfig(logging_config)
File "/usr/local/lib/python3.6/logging/config.py", line 802, in dictConfig
dictConfigClass(config).configure()
File "/usr/local/lib/python3.6/logging/config.py", line 573, in configure
'%r: %s' % (name, e))
ValueError: Unable to configure handler 'processor': [Errno 13] Permission denied: '/opt/airflow/logs/scheduler/2020-08-20'
Persistent volume (created manually, not from the stable/airflow chart):
apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"efs-pv"},"spec":{"accessModes":["ReadWriteMany"],"capacity":{"storage":"5Gi"},"csi":{"driver":"efs.csi.aws.com","volumeHandle":"fs-e476a166"},"persistentVolumeReclaimPolicy":"Retain","storageClassName":"efs-sc","volumeMode":"Filesystem"}}
pv.kubernetes.io/bound-by-controller: "yes"
creationTimestamp: "2020-08-20T15:47:21Z"
finalizers:
- kubernetes.io/pv-protection
name: efs-pv
resourceVersion: "49476860"
selfLink: /api/v1/persistentvolumes/efs-pv
uid: 45d9f5ea-66c1-493e-a2f5-03e17f397747
spec:
accessModes:
- ReadWriteMany
capacity:
storage: 5Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: efs-claim
namespace: airflow
resourceVersion: "49476857"
uid: 354103ea-f8a9-47f1-a7cf-8f449f9a2e8b
csi:
driver: efs.csi.aws.com
volumeHandle: fs-e476a166
persistentVolumeReclaimPolicy: Retain
storageClassName: efs-sc
volumeMode: Filesystem
status:
phase: Bound
Persistent Volume Claim for logs (created manually, not from the stable/airflow chart):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"efs-claim","namespace":"airflow"},"spec":{"accessModes":["ReadWriteMany"],"resources":{"requests":{"storage":"5Gi"}},"storageClassName":"efs-sc"}}
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
creationTimestamp: "2020-08-20T15:47:46Z"
finalizers:
- kubernetes.io/pvc-protection
name: efs-claim
namespace: airflow
resourceVersion: "49476866"
selfLink: /api/v1/namespaces/airflow/persistentvolumeclaims/efs-claim
uid: 354103ea-f8a9-47f1-a7cf-8f449f9a2e8b
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 5Gi
storageClassName: efs-sc
volumeMode: Filesystem
volumeName: efs-pv
status:
accessModes:
- ReadWriteMany
capacity:
storage: 5Gi
phase: Bound
My values.yaml is below:
airflow:
image:
repository: apache/airflow
tag: 1.10.10-python3.6
## values: Always or IfNotPresent
pullPolicy: IfNotPresent
pullSecret: ""
executor: KubernetesExecutor
fernetKey: "XXXXXXXXXHIVb8jK6lfmSAvx4mO6Arehnc="
config:
AIRFLOW__CORE__REMOTE_LOGGING: "True"
AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER: "s3://mybucket/airflow/logs"
AIRFLOW__CORE__REMOTE_LOG_CONN_ID: "MyS3Conn"
AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY: "apache/airflow"
AIRFLOW__KUBERNETES__WORKER_CONTAINER_TAG: "1.10.10-python3.6"
AIRFLOW__KUBERNETES__WORKER_CONTAINER_IMAGE_PULL_POLICY: "IfNotPresent"
AIRFLOW__KUBERNETES__WORKER_PODS_CREATION_BATCH_SIZE: "10"
AIRFLOW__KUBERNETES__LOGS_VOLUME_CLAIM: "efs-claim"
AIRFLOW__KUBERNETES__GIT_REPO: "git@github.com:org/myrepo.git"
AIRFLOW__KUBERNETES__GIT_BRANCH: "develop"
AIRFLOW__KUBERNETES__GIT_DAGS_FOLDER_MOUNT_POINT: "/opt/airflow/dags"
AIRFLOW__KUBERNETES__DAGS_VOLUME_SUBPATH: "repo/"
AIRFLOW__KUBERNETES__GIT_SSH_KEY_SECRET_NAME: "airflow-git-keys"
AIRFLOW__KUBERNETES__NAMESPACE: "airflow"
AIRFLOW__KUBERNETES__DELETE_WORKER_PODS: "True"
AIRFLOW__KUBERNETES__RUN_AS_USER: "50000"
AIRFLOW__CORE__LOAD_EXAMPLES: "False"
AIRFLOW__SCHEDULER__DAG_DIR_LIST_INTERVAL: "60"
AIRFLOW__KUBERNETES__WORKER_SERVICE_ACCOUNT_NAME: "airflow"
podAnnotations: {}
extraEnv: []
extraConfigmapMounts: []
extraContainers: []
extraPipPackages: []
extraVolumeMounts: []
extraVolumes: []
scheduler:
resources: {}
nodeSelector: {}
affinity: {}
tolerations: []
labels: {}
podLabels: {}
annotations: {}
podAnnotations: {}
podDisruptionBudget:
enabled: true
maxUnavailable: "100%"
minAvailable: ""
connections:
- id: MyS3Conn
type: aws
extra: |
{
"aws_access_key_id": "XXXXXXXXX",
"aws_secret_access_key": "XXXXXXXX",
"region_name":"us-west-1"
}
refreshConnections: true
variables: |
{}
pools: |
{}
numRuns: -1
initdb: true
preinitdb: false
initialStartupDelay: 0
extraInitContainers: []
web:
resources: {}
replicas: 1
nodeSelector: {}
affinity: {}
tolerations: []
labels: {}
podLabels: {}
annotations: {}
podAnnotations: {}
service:
annotations: {}
sessionAffinity: "None"
sessionAffinityConfig: {}
type: ClusterIP
externalPort: 8080
loadBalancerIP: ""
loadBalancerSourceRanges: []
nodePort:
http: ""
baseUrl: "http://localhost:8080"
serializeDAGs: false
extraPipPackages: []
initialStartupDelay: 0
minReadySeconds: 5
readinessProbe:
enabled: false
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
livenessProbe:
enabled: true
scheme: HTTP
initialDelaySeconds: 300
periodSeconds: 30
timeoutSeconds: 3
successThreshold: 1
failureThreshold: 2
secretsDir: /var/airflow/secrets
secrets: []
secretsMap:
workers:
enabled: false
resources: {}
replicas: 1
nodeSelector: {}
affinity: {}
tolerations: []
labels: {}
podLabels: {}
annotations: {}
podAnnotations: {}
autoscaling:
enabled: false
maxReplicas: 2
metrics: []
initialStartupDelay: 0
celery:
instances: 1
gracefullTermination: false
gracefullTerminationPeriod: 600
terminationPeriod: 60
secretsDir: /var/airflow/secrets
secrets: []
secretsMap:
flower:
enabled: false
resources: {}
nodeSelector: {}
affinity: {}
tolerations: []
labels: {}
podLabels: {}
annotations: {}
podAnnotations: {}
basicAuthSecret: ""
basicAuthSecretKey: ""
urlPrefix: ""
service:
annotations: {}
type: ClusterIP
externalPort: 5555
loadBalancerIP: ""
loadBalancerSourceRanges: []
nodePort:
http: ""
initialStartupDelay: 0
extraConfigmapMounts: []
logs:
path: /opt/airflow/logs
persistence:
enabled: true
existingClaim: efs-claim
subPath: ""
storageClass: efs-sc
accessMode: ReadWriteMany
size: 1Gi
dags:
path: /opt/airflow/dags
doNotPickle: false
installRequirements: false
persistence:
enabled: false
existingClaim: ""
subPath: ""
storageClass: ""
accessMode: ReadOnlyMany
size: 1Gi
git:
url: git@github.com:org/myrepo.git
ref: develop
secret: airflow-git-keys
sshKeyscan: false
privateKeyName: id_rsa
repoHost: github.com
repoPort: 22
gitSync:
enabled: true
resources: {}
image:
repository: alpine/git
tag: latest
pullPolicy: Always
refreshTime: 60
initContainer:
enabled: false
resources: {}
image:
repository: alpine/git
tag: latest
pullPolicy: Always
mountPath: "/dags"
syncSubPath: ""
ingress:
enabled: false
web:
annotations: {}
path: ""
host: ""
livenessPath: ""
tls:
enabled: false
secretName: ""
precedingPaths: []
succeedingPaths: []
flower:
annotations: {}
path: ""
host: ""
livenessPath: ""
tls:
enabled: false
secretName: ""
rbac:
create: true
serviceAccount:
create: true
name: ""
annotations: {}
extraManifests: []
postgresql:
enabled: true
postgresqlDatabase: airflow
postgresqlUsername: postgres
postgresqlPassword: airflow
existingSecret: ""
existingSecretKey: "postgresql-password"
persistence:
enabled: true
storageClass: ""
accessModes:
- ReadWriteOnce
size: 5Gi
externalDatabase:
type: postgres
host: localhost
port: 5432
database: airflow
user: airflow
passwordSecret: ""
passwordSecretKey: "postgresql-password"
redis:
enabled: false
password: airflow
existingSecret: ""
existingSecretKey: "redis-password"
cluster:
enabled: false
slaveCount: 1
master:
resources: {}
persistence:
enabled: false
storageClass: ""
accessModes:
- ReadWriteOnce
size: 8Gi
slave:
resources: {}
persistence:
enabled: false
storageClass: ""
accessModes:
- ReadWriteOnce
size: 8Gi
externalRedis:
host: localhost
port: 6379
databaseNumber: 1
passwordSecret: ""
passwordSecretKey: "redis-password"
serviceMonitor:
enabled: false
selector:
prometheus: kube-prometheus
path: /admin/metrics
interval: "30s"
prometheusRule:
enabled: false
additionalLabels: {}
groups: []
I'm not really sure what to do here; does anyone know how to fix the permission error?
I have had this issue on Google Cloud Platform with the Airflow Helm chart 1.2.0 (which uses Airflow 2).
What ended up working was:
extraInitContainers:
  - name: fix-volume-logs-permissions
    image: busybox
    command: [ "sh", "-c", "chown -R 50000:0 /opt/airflow/logs/" ]
    securityContext:
      runAsUser: 0
    volumeMounts:
      - mountPath: /opt/airflow/logs/
        name: logs
by tweaking based on Ajay's answer. Please note that:
the values 50000:0 are based on the uid and gid set up in your values.yaml
you need to use extraInitContainers under scheduler and not worker (see the placement sketch below)
"logs" seems to be the volume name automatically used by the Helm logging config when enabled
the security context was necessary for me, or else the chown failed due to insufficient privileges
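For clarity, this is how the block is placed in values.yaml for the chart I was using (a sketch; only the nesting under scheduler matters here):
scheduler:
  extraInitContainers:
    - name: fix-volume-logs-permissions
      image: busybox
      command: [ "sh", "-c", "chown -R 50000:0 /opt/airflow/logs/" ]
      securityContext:
        runAsUser: 0
      volumeMounts:
        - mountPath: /opt/airflow/logs/
          name: logs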
You can use extraInitContainers with the scheduler to change the permissions, something like this:
extraInitContainers:
  - name: volume-logs
    image: busybox
    command: ["sh", "-c", "chown -R 50000:50000 /opt/airflow/logs/"]
    volumeMounts:
      - mountPath: /opt/airflow/logs/
        name: logs-data
This will change the permissions of the mount point.
You can also try setting workers.persistence.fixPermissions: true in values.yaml; that works as well.
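In values.yaml that dotted path maps to (sketch):
workers:
  persistence:
    fixPermissions: true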
I have been trying to create a pod with helm upgrade:
helm upgrade --values=$(System.DefaultWorkingDirectory)/_NAME-deploy-CI/drop/values-NAME.yaml --namespace sda-NAME-pro --install --reset-values --debug --wait NAME .
but I am running into the error below:
2020-07-08T12:51:28.0678161Z upgrade.go:367: [debug] warning: Upgrade "NAME" failed: failed to create resource: Deployment.apps "NAME" is invalid: [spec.template.spec.volumes[1].secret.secretName: Required value, spec.template.spec.containers[0].volumeMounts[2].name: Not found: "secretvol"]
2020-07-08T12:51:28.0899772Z Error: UPGRADE FAILED: failed to create resource: Deployment.apps "NAME" is invalid: [spec.template.spec.volumes[1].secret.secretName: Required value, spec.template.spec.containers[0].volumeMounts[2].name: Not found: "secretvol"]
YAML part:
volumeMounts:
  - name: secretvol
    mountPath: "/etc/secret-vol"
    readOnly: true
volumes:
  - name: jks
    secret:
      secretName: {{ .Values.secret.jks }}
  - name: secretvol
    secret:
      secretName: {{ .Values.secret.secretvol }}
Maybe the first deploy needs a different command the first time? How can I specify these values to test it?
TL;DR
The issue you've encountered:
2020-07-08T12:51:28.0899772Z Error: UPGRADE FAILED: failed to create resource: Deployment.apps "NAME" is invalid: [spec.template.spec.volumes[1].secret.secretName: Required value, spec.template.spec.containers[0].volumeMounts[2].name: Not found: "secretvol"]
is connected with the fact that the variable: {{ .Values.secret.secretvol }} is missing.
To fix it you will need to set this value in either:
the Helm command that you are using, or
the file that stores your values in the Helm chart.
A tip!
You can run your Helm command with --debug --dry-run to output the generated YAML. This should show you where the errors are located (see the example below).
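For the command from the question, that would be something like (a sketch, reusing the original flags):
helm upgrade --install --dry-run --debug --namespace sda-NAME-pro -f values-NAME.yaml NAME .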
There is an official documentation about values in Helm. Please take a look here:
Helm.sh: Docs: Chart template guid: Values files
Based on:
I have been trying to create a POD with HELM UPGRADE:
I've made an example of your issue and how you can fix it.
Steps:
Create a helm chart with correct values
Edit the values to reproduce the error
Create a helm chart
For simplicity, I created a basic Helm chart.
Below is the structure of files and directories:
❯ tree helm-dir
helm-dir
├── Chart.yaml
├── templates
│ └── pod.yaml
└── values.yaml
1 directory, 3 files
Create Chart.yaml file
Below is the Chart.yaml file:
apiVersion: v2
name: helm-pod
description: A Helm chart for spawning pod with volumeMount
version: 0.1.0
Create a values.yaml file
Below is the simple values.yaml file which will be used by default in the $ helm install command
usedImage: ubuntu
confidentialName: secret-password # name of the secret in Kubernetes
Create a template for a pod
This template is stored in templates directory with a name pod.yaml
Below YAML definition will be a template for spawned pod:
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Values.usedImage }} # value from "values.yaml"
  labels:
    app: {{ .Values.usedImage }} # value from "values.yaml"
spec:
  restartPolicy: Never
  containers:
    - name: {{ .Values.usedImage }} # value from "values.yaml"
      image: {{ .Values.usedImage }} # value from "values.yaml"
      imagePullPolicy: Always
      command:
        - sleep
        - infinity
      volumeMounts:
        - name: secretvol # same name as in spec.volumes.name
          mountPath: "/etc/secret-vol"
          readOnly: true
  volumes:
    - name: secretvol # same name as in spec.containers.volumeMounts.name
      secret:
        secretName: {{ .Values.confidentialName }} # value from "values.yaml"
With above example you should be able to run $ helm install --name test-pod .
You should get output similar to this:
NAME: test-pod
LAST DEPLOYED: Thu Jul 9 14:47:46 2020
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Pod
NAME READY STATUS RESTARTS AGE
ubuntu 0/1 ContainerCreating 0 0s
Disclaimer!
The ubuntu pod is in the ContainerCreating state as there is no secret named secret-password in the cluster.
You can get more information about your pods by running:
$ kubectl describe pod POD_NAME
Edit the values to reproduce the error
The error you got as described earlier is most probably connected with the fact that the value: {{ .Values.secret.secretvol }} was missing.
If you were to edit the values.yaml file to:
usedImage: ubuntu
# confidentialName: secret-password # name of the secret in Kubernetes
Notice the added #.
You should get below error when trying to deploy this chart:
Error: release test-pod failed: Pod "ubuntu" is invalid: [spec.volumes[0].secret.secretName: Required value, spec.containers[0].volumeMounts[0].name: Not found: "secretvol"]
I previously mentioned the --debug --dry-run parameters for Helm.
If you run:
$ helm install --name test-pod --debug --dry-run .
You should get output similar to this (this is only part of it):
---
# Source: helm-pod/templates/pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: ubuntu # value from "values.yaml"
labels:
app: ubuntu # value from "values.yaml"
spec:
restartPolicy: Never
containers:
- name: ubuntu # value from "values.yaml"
image: ubuntu # value from "values.yaml"
imagePullPolicy: Always
command:
- sleep
- infinity
volumeMounts:
- name: secretvol # same name as in spec.volumes.name
mountPath: "/etc/secret-vol"
readOnly: true
volumes:
- name: secretvol # same name as in spec.containers.volumeMounts.name
secret:
secretName: # value from "values.yaml"
As you can see, the value of secretName was missing. That's the reason the above error was showing up.
secretName: # value from "values.yaml"
Thank you Dawik, here we have the output:
2020-07-10T11:34:26.3090526Z LAST DEPLOYED: Fri Jul 10 11:34:25 2020
2020-07-10T11:34:26.3091661Z NAMESPACE: sda-NAME
2020-07-10T11:34:26.3092410Z STATUS: pending-upgrade
2020-07-10T11:34:26.3092796Z REVISION: 13
2020-07-10T11:34:26.3093182Z TEST SUITE: None
2020-07-10T11:34:26.3093781Z USER-SUPPLIED VALUES:
2020-07-10T11:34:26.3105880Z affinity: {}
2020-07-10T11:34:26.3106801Z containers:
2020-07-10T11:34:26.3107446Z port: 8080
2020-07-10T11:34:26.3108124Z portName: http
2020-07-10T11:34:26.3108769Z protocol: TCP
2020-07-10T11:34:26.3109440Z env:
2020-07-10T11:34:26.3110613Z APP_NAME: NAME
2020-07-10T11:34:26.3112959Z JAVA_OPTS_EXT: -Djava.security.egd=file:/dev/./urandom -Dcom.sun.net.ssl.checkRevocation=*** -Djavax.net.ssl.trustStore=/etc/truststore/jssecacerts
2020-07-10T11:34:26.3115219Z -Djavax.net.ssl.trustStorePassword=changeit
2020-07-10T11:34:26.3116160Z SPRING_CLOUD_CONFIG_PROFILE: pro
2020-07-10T11:34:26.3116974Z TZ: Europe/Madrid
2020-07-10T11:34:26.3117647Z WILY_MOM_PORT: 5001
2020-07-10T11:34:26.3119640Z spring_application_name: NAME
2020-07-10T11:34:26.3121048Z spring_cloud_config_uri: URI
2020-07-10T11:34:26.3122038Z envSecrets: {}
2020-07-10T11:34:26.3122789Z fullnameOverride: ""
2020-07-10T11:34:26.3123489Z image:
2020-07-10T11:34:26.3124470Z pullPolicy: Always
2020-07-10T11:34:26.3125908Z repository: NAME-REPO
2020-07-10T11:34:26.3126955Z imagePullSecrets: []
2020-07-10T11:34:26.3127675Z ingress:
2020-07-10T11:34:26.3128727Z enabled: ***
2020-07-10T11:34:26.3129509Z livenessProbe: {}
2020-07-10T11:34:26.3130143Z nameOverride: ""
2020-07-10T11:34:26.3131148Z nameSpace: sda-NAME
2020-07-10T11:34:26.3131820Z nodeSelector: {}
2020-07-10T11:34:26.3132444Z podSecurityContext: {}
2020-07-10T11:34:26.3133135Z readinessProbe: {}
2020-07-10T11:34:26.3133742Z replicaCount: 1
2020-07-10T11:34:26.3134636Z resources:
2020-07-10T11:34:26.3135362Z limits:
2020-07-10T11:34:26.3135865Z cpu: 150m
2020-07-10T11:34:26.3136404Z memory: 1444Mi
2020-07-10T11:34:26.3137257Z requests:
2020-07-10T11:34:26.3137851Z cpu: 100m
2020-07-10T11:34:26.3138391Z memory: 1024Mi
2020-07-10T11:34:26.3138942Z route:
2020-07-10T11:34:26.3139486Z alternateBackends: []
2020-07-10T11:34:26.3140087Z annotations: null
2020-07-10T11:34:26.3140642Z enabled: true
2020-07-10T11:34:26.3141226Z fullnameOverride: ""
2020-07-10T11:34:26.3142695Z host:HOST-NAME
2020-07-10T11:34:26.3143480Z labels: null
2020-07-10T11:34:26.3144217Z nameOverride: ""
2020-07-10T11:34:26.3145137Z path: ""
2020-07-10T11:34:26.3145637Z service:
2020-07-10T11:34:26.3146439Z name: NAME
2020-07-10T11:34:26.3147049Z targetPort: http
2020-07-10T11:34:26.3147607Z weight: 100
2020-07-10T11:34:26.3148121Z status: ""
2020-07-10T11:34:26.3148623Z tls:
2020-07-10T11:34:26.3149162Z caCertificate: null
2020-07-10T11:34:26.3149820Z certificate: null
2020-07-10T11:34:26.3150467Z destinationCACertificate: null
2020-07-10T11:34:26.3151091Z enabled: true
2020-07-10T11:34:26.3151847Z insecureEdgeTerminationPolicy: None
2020-07-10T11:34:26.3152483Z key: null
2020-07-10T11:34:26.3153032Z termination: edge
2020-07-10T11:34:26.3154104Z wildcardPolicy: None
2020-07-10T11:34:26.3155687Z secret:
2020-07-10T11:34:26.3156714Z jks: NAME-jks
2020-07-10T11:34:26.3157408Z jssecacerts: jssecacerts
2020-07-10T11:34:26.3157962Z securityContext: {}
2020-07-10T11:34:26.3158490Z service:
2020-07-10T11:34:26.3159127Z containerPort: 8080
2020-07-10T11:34:26.3159627Z port: 8080
2020-07-10T11:34:26.3160103Z portName: http
2020-07-10T11:34:26.3160759Z targetPort: 8080
2020-07-10T11:34:26.3161219Z type: ClusterIP
2020-07-10T11:34:26.3161694Z serviceAccount:
2020-07-10T11:34:26.3162482Z create: ***
2020-07-10T11:34:26.3162990Z name: null
2020-07-10T11:34:26.3163451Z tolerations: []
2020-07-10T11:34:26.3163836Z
2020-07-10T11:34:26.3164534Z COMPUTED VALUES:
2020-07-10T11:34:26.3165022Z affinity: {}
2020-07-10T11:34:26.3165474Z containers:
2020-07-10T11:34:26.3165931Z port: 8080
2020-07-10T11:34:26.3166382Z portName: http
2020-07-10T11:34:26.3166861Z protocol: TCP
2020-07-10T11:34:26.3167284Z env:
2020-07-10T11:34:26.3168046Z APP_NAME: NAME
2020-07-10T11:34:26.3169887Z JAVA_OPTS_EXT: -Djava.security.egd=file:/dev/./urandom -Dcom.sun.net.ssl.checkRevocation=*** -Djavax.net.ssl.trustStore=/etc/truststore/jssecacerts
2020-07-10T11:34:26.3175782Z -Djavax.net.ssl.trustStorePassword=changeit
2020-07-10T11:34:26.3176587Z SPRING_CLOUD_CONFIG_PROFILE: pro
2020-07-10T11:34:26.3177184Z TZ: Europe/Madrid
2020-07-10T11:34:26.3177683Z WILY_MOM_PORT: 5001
2020-07-10T11:34:26.3178559Z spring_application_name: NAME
2020-07-10T11:34:26.3179807Z spring_cloud_config_uri: https://URL
2020-07-10T11:34:26.3181055Z envSecrets: {}
2020-07-10T11:34:26.3181569Z fullnameOverride: ""
2020-07-10T11:34:26.3182077Z image:
2020-07-10T11:34:26.3182707Z pullPolicy: Always
2020-07-10T11:34:26.3184026Z repository: REPO
2020-07-10T11:34:26.3185001Z imagePullSecrets: []
2020-07-10T11:34:26.3185461Z ingress:
2020-07-10T11:34:26.3186215Z enabled: ***
2020-07-10T11:34:26.3186709Z livenessProbe: {}
2020-07-10T11:34:26.3187187Z nameOverride: ""
2020-07-10T11:34:26.3188416Z nameSpace: sda-NAME
2020-07-10T11:34:26.3189008Z nodeSelector: {}
2020-07-10T11:34:26.3189522Z podSecurityContext: {}
2020-07-10T11:34:26.3190056Z readinessProbe: {}
2020-07-10T11:34:26.3190552Z replicaCount: 1
2020-07-10T11:34:26.3191030Z resources:
2020-07-10T11:34:26.3191686Z limits:
2020-07-10T11:34:26.3192320Z cpu: 150m
2020-07-10T11:34:26.3192819Z memory: 1444Mi
2020-07-10T11:34:26.3193319Z requests:
2020-07-10T11:34:26.3193797Z cpu: 100m
2020-07-10T11:34:26.3194463Z memory: 1024Mi
2020-07-10T11:34:26.3194975Z route:
2020-07-10T11:34:26.3195470Z alternateBackends: []
2020-07-10T11:34:26.3196028Z enabled: true
2020-07-10T11:34:26.3196556Z fullnameOverride: ""
2020-07-10T11:34:26.3197601Z host: HOST-NAME
2020-07-10T11:34:26.3198314Z nameOverride: ""
2020-07-10T11:34:26.3198828Z path: ""
2020-07-10T11:34:26.3199285Z service:
2020-07-10T11:34:26.3200023Z name: NAME
2020-07-10T11:34:26.3233791Z targetPort: http
2020-07-10T11:34:26.3234697Z weight: 100
2020-07-10T11:34:26.3235283Z status: ""
2020-07-10T11:34:26.3235819Z tls:
2020-07-10T11:34:26.3236787Z enabled: true
2020-07-10T11:34:26.3237479Z insecureEdgeTerminationPolicy: None
2020-07-10T11:34:26.3238168Z termination: edge
2020-07-10T11:34:26.3238800Z wildcardPolicy: None
2020-07-10T11:34:26.3239421Z secret:
2020-07-10T11:34:26.3240502Z jks: NAME-servers-jks
2020-07-10T11:34:26.3241249Z jssecacerts: jssecacerts
2020-07-10T11:34:26.3241901Z securityContext: {}
2020-07-10T11:34:26.3242534Z service:
2020-07-10T11:34:26.3243157Z containerPort: 8080
2020-07-10T11:34:26.3243770Z port: 8080
2020-07-10T11:34:26.3244543Z portName: http
2020-07-10T11:34:26.3245190Z targetPort: 8080
2020-07-10T11:34:26.3245772Z type: ClusterIP
2020-07-10T11:34:26.3246343Z serviceAccount:
2020-07-10T11:34:26.3247308Z create: ***
2020-07-10T11:34:26.3247993Z tolerations: []
2020-07-10T11:34:26.3248511Z
2020-07-10T11:34:26.3249065Z HOOKS:
2020-07-10T11:34:26.3249600Z MANIFEST:
2020-07-10T11:34:26.3250504Z ---
2020-07-10T11:34:26.3252176Z # Source: NAME/templates/service.yaml
2020-07-10T11:34:26.3253107Z apiVersion: v1
2020-07-10T11:34:26.3253715Z kind: Service
2020-07-10T11:34:26.3254487Z metadata:
2020-07-10T11:34:26.3255338Z name: NAME
2020-07-10T11:34:26.3256318Z namespace: sda-NAME
2020-07-10T11:34:26.3256883Z labels:
2020-07-10T11:34:26.3257666Z helm.sh/chart: NAME-1.0.0
2020-07-10T11:34:26.3258533Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3259785Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3260503Z app.kubernetes.io/version: "latest"
2020-07-10T11:34:26.3261383Z app.kubernetes.io/managed-by: Helm
2020-07-10T11:34:26.3261955Z spec:
2020-07-10T11:34:26.3262427Z type: ClusterIP
2020-07-10T11:34:26.3263292Z ports:
2020-07-10T11:34:26.3264086Z - port: 8080
2020-07-10T11:34:26.3264659Z targetPort: 8080
2020-07-10T11:34:26.3265359Z protocol: TCP
2020-07-10T11:34:26.3265900Z name: http
2020-07-10T11:34:26.3266361Z selector:
2020-07-10T11:34:26.3267220Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3268298Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3269380Z ---
2020-07-10T11:34:26.3270539Z # Source: NAME/templates/deployment.yaml
2020-07-10T11:34:26.3271606Z apiVersion: apps/v1
2020-07-10T11:34:26.3272400Z kind: Deployment
2020-07-10T11:34:26.3273326Z metadata:
2020-07-10T11:34:26.3274457Z name: NAME
2020-07-10T11:34:26.3275511Z namespace: sda-NAME
2020-07-10T11:34:26.3276177Z labels:
2020-07-10T11:34:26.3277219Z helm.sh/chart: NAME-1.0.0
2020-07-10T11:34:26.3278322Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3279447Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3280249Z app.kubernetes.io/version: "latest"
2020-07-10T11:34:26.3281398Z app.kubernetes.io/managed-by: Helm
2020-07-10T11:34:26.3282289Z spec:
2020-07-10T11:34:26.3282881Z replicas: 1
2020-07-10T11:34:26.3283505Z selector:
2020-07-10T11:34:26.3284469Z matchLabels:
2020-07-10T11:34:26.3285628Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3286815Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3287549Z template:
2020-07-10T11:34:26.3288192Z metadata:
2020-07-10T11:34:26.3288826Z labels:
2020-07-10T11:34:26.3289909Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3291596Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3292439Z spec:
2020-07-10T11:34:26.3293109Z serviceAccountName: default
2020-07-10T11:34:26.3293774Z securityContext:
2020-07-10T11:34:26.3294666Z {}
2020-07-10T11:34:26.3295217Z containers:
2020-07-10T11:34:26.3296338Z - name: NAME
2020-07-10T11:34:26.3297240Z securityContext:
2020-07-10T11:34:26.3297859Z {}
2020-07-10T11:34:26.3299353Z image: "REGISTRY-IMAGE"
2020-07-10T11:34:26.3300638Z imagePullPolicy: Always
2020-07-10T11:34:26.3301358Z ports:
2020-07-10T11:34:26.3302491Z - name:
2020-07-10T11:34:26.3303380Z containerPort: 8080
2020-07-10T11:34:26.3304479Z protocol: TCP
2020-07-10T11:34:26.3305325Z env:
2020-07-10T11:34:26.3306418Z - name: APP_NAME
2020-07-10T11:34:26.3307576Z value: "NAME"
2020-07-10T11:34:26.3308757Z - name: JAVA_OPTS_EXT
2020-07-10T11:34:26.3311974Z value: "-Djava.security.egd=file:/dev/./urandom -Dcom.sun.net.ssl.checkRevocation=*** -Djavax.net.ssl.trustStore=/etc/truststore/jssecacerts -Djavax.net.ssl.trustStorePassword=changeit"
2020-07-10T11:34:26.3313760Z - name: SPRING_CLOUD_CONFIG_PROFILE
2020-07-10T11:34:26.3314842Z value: "pro"
2020-07-10T11:34:26.3315890Z - name: TZ
2020-07-10T11:34:26.3316777Z value: "Europe/Madrid"
2020-07-10T11:34:26.3317863Z - name: WILY_MOM_PORT
2020-07-10T11:34:26.3318485Z value: "5001"
2020-07-10T11:34:26.3319421Z - name: spring_application_name
2020-07-10T11:34:26.3320679Z value: "NAME"
2020-07-10T11:34:26.3321858Z - name: spring_cloud_config_uri
2020-07-10T11:34:26.3323093Z value: "https://config.sda-NAME-pro.svc.cluster.local"
2020-07-10T11:34:26.3324190Z resources:
2020-07-10T11:34:26.3324905Z limits:
2020-07-10T11:34:26.3325439Z cpu: 150m
2020-07-10T11:34:26.3325985Z memory: 1444Mi
2020-07-10T11:34:26.3326739Z requests:
2020-07-10T11:34:26.3327305Z cpu: 100m
2020-07-10T11:34:26.3327875Z memory: 1024Mi
2020-07-10T11:34:26.3328436Z volumeMounts:
2020-07-10T11:34:26.3329476Z - name: jks
2020-07-10T11:34:26.3330147Z mountPath: "/etc/jks"
2020-07-10T11:34:26.3331153Z readOnly: true
2020-07-10T11:34:26.3332053Z - name: jssecacerts
2020-07-10T11:34:26.3332739Z mountPath: "/etc/truststore"
2020-07-10T11:34:26.3333356Z readOnly: true
2020-07-10T11:34:26.3334402Z - name: secretvol
2020-07-10T11:34:26.3335565Z mountPath: "/etc/secret-vol"
2020-07-10T11:34:26.3336302Z readOnly: true
2020-07-10T11:34:26.3336935Z volumes:
2020-07-10T11:34:26.3338100Z - name: jks
2020-07-10T11:34:26.3338724Z secret:
2020-07-10T11:34:26.3339946Z secretName: NAME-servers-jks
2020-07-10T11:34:26.3340817Z - name: secretvol
2020-07-10T11:34:26.3341347Z secret:
2020-07-10T11:34:26.3341870Z secretName:
2020-07-10T11:34:26.3342633Z - name: jssecacerts
2020-07-10T11:34:26.3343444Z secret:
2020-07-10T11:34:26.3344103Z secretName: jssecacerts
2020-07-10T11:34:26.3344866Z ---
2020-07-10T11:34:26.3345846Z # Source: NAME/templates/route.yaml
2020-07-10T11:34:26.3346641Z apiVersion: route.openshift.io/v1
2020-07-10T11:34:26.3347112Z kind: Route
2020-07-10T11:34:26.3347568Z metadata:
2020-07-10T11:34:26.3354831Z name: NAME
2020-07-10T11:34:26.3357144Z labels:
2020-07-10T11:34:26.3358020Z helm.sh/chart: NAME-1.0.0
2020-07-10T11:34:26.3359360Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3360306Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3361002Z app.kubernetes.io/version: "latest"
2020-07-10T11:34:26.3361888Z app.kubernetes.io/managed-by: Helm
2020-07-10T11:34:26.3362463Z spec:
2020-07-10T11:34:26.3363374Z host: HOST
2020-07-10T11:34:26.3364364Z path:
2020-07-10T11:34:26.3364940Z wildcardPolicy: None
2020-07-10T11:34:26.3365630Z port:
2020-07-10T11:34:26.3366080Z targetPort: http
2020-07-10T11:34:26.3366496Z tls:
2020-07-10T11:34:26.3367144Z termination: edge
2020-07-10T11:34:26.3367630Z insecureEdgeTerminationPolicy: None
2020-07-10T11:34:26.3368072Z to:
2020-07-10T11:34:26.3368572Z kind: Service
2020-07-10T11:34:26.3369571Z name: NAME
2020-07-10T11:34:26.3369919Z weight: 100
2020-07-10T11:34:26.3370115Z status:
2020-07-10T11:34:26.3370287Z ingress: []
2020-07-10T11:34:26.3370419Z
2020-07-10T11:34:26.3370579Z NOTES:
2020-07-10T11:34:26.3370833Z 1. Get the application URL by running these commands:
2020-07-10T11:34:26.3371698Z export POD_NAME=$(kubectl get pods --namespace sda-NAME -l "app.kubernetes.io/name=NAME,app.kubernetes.io/instance=NAME" -o jsonpath="{.items[0].metadata.name}")
2020-07-10T11:34:26.3372278Z echo "Visit http://127.0.0.1:8080 to use your application"
2020-07-10T11:34:26.3373358Z kubectl --namespace sda-NAME port-forward $POD_NAME 8080:80
2020-07-10T11:34:26.3373586Z
2020-07-10T11:34:26.3385047Z ##[section]Finishing: Helm Install/Upgrade NAME
It looks fine and doesn't show any error... but if I run it without --dry-run it crashes at the same point.
On the other hand, I tried it without this volume and secret... and it works perfectly! I don't understand it.
Thank you for your patience and guidance.
UPDATE & FIX:
Finally, the problem was in the file values-NAME.yml:
secret:
  jks: VALUE
  jssecacerts: VALUE
It needs the following line under secret:
  secretvol: VALUE
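So the complete secret block in values-NAME.yml ends up looking like this (VALUE placeholders as before):
secret:
  jks: VALUE
  jssecacerts: VALUE
  secretvol: VALUE   # referenced by {{ .Values.secret.secretvol }} in the deployment template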