Fluentd not able to scrape high-volume logs - Grafana Loki

I am trying to use Fluentd to push logs to Loki using the fluentd-loki plugin. Fluentd cannot keep up in real time once the log rate exceeds 24,000 lines/sec. I need help configuring Fluentd so that it tails logs quickly and in real time.
My fluentd_daemonset.yaml
This is the DaemonSet manifest:
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
namespace: loki-fluentd
labels:
app: fluentd
version: v1
kubernetes.io/cluster-service: "true"
spec:
selector:
matchLabels:
app: fluentd
version: v1
kubernetes.io/cluster-service: "true"
template:
metadata:
labels:
app: fluentd
version: v1
kubernetes.io/cluster-service: "true"
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.4-debian-forward-1
command:
- /bin/sh
- '-c'
- >
fluent-gem i fluent-plugin-grafana-loki-licence-fix ;
fluent-gem i fluent-plugin-parser-cri --no-document ;
tini /fluentd/entrypoint.sh;
resources:
limits:
memory: 1024Mi
requests:
cpu: 1000m
memory: 1024Mi
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: config
mountPath: /fluentd/etc
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: config
configMap:
name: fluentd-config
This is the config file, fluentd-config.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
name: fluentd-config
namespace: loki-fluentd
labels:
app: fluentd
data:
fluent.conf: |
<source>
  @type tail
  @id in_tail_container_logs
  path /var/log/containers/loggen-*.log
  # exclude_path ["/var/log/containers/fluentd*"]
  pos_file /tmp/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type cri
    time_format %Y-%m-%dT%H:%M:%S.%L%z
  </parse>
</source>
<match fluentd.**>
  @type null
</match>
<match kubernetes.var.log.containers.**fluentd**.log>
  @type null
</match>
<filter kubernetes.**>
  @type kubernetes_metadata
  @id filter_kube_metadata
</filter>
<filter kubernetes.var.log.containers.**>
  @type record_transformer
  enable_ruby
  remove_keys kubernetes, docker
  <record>
    app ${ record.dig("kubernetes", "labels", "app") }
    job ${ record.dig("kubernetes", "labels", "app") }
    namespace ${ record.dig("kubernetes", "namespace_name") }
    pod ${ record.dig("kubernetes", "pod_name") }
    container ${ record.dig("kubernetes", "container_name") }
    filename ${ record.dig("kubernetes", "filename")}
    workers ${ record.dig("kubernetes", "worker") }
  </record>
</filter>
<match kubernetes.var.log.containers.**>
  @type copy
  <label>
    fluentd_worker
  </label>
  <store>
    @type loki
    url "http://loki-url"
    extra_labels {"env":"dev"}
    label_keys "app,job,namespace,pod,container,filename,fluentd_worker,workers"
    <buffer>
      flush_thread_count "#{ENV['FLUENT_LOKI_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
      # flush_interval "#{ENV['FLUENT_LOKI_BUFFER_FLUSH_INTERVAL'] || '1s'}"
      flush_mode "#{ENV['FLUENT_LOKI_BUFFER_FLUSH_INTERVAL'] || 'immediate'}"
      chunk_limit_size "#{ENV['FLUENT_LOKI_BUFFER_CHUNK_LIMIT_SIZE'] || '512k'}"
      queue_limit_length "#{ENV['FLUENT_LOKI_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
      retry_max_interval "#{ENV['FLUENT_LOKI_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
      retry_forever true
    </buffer>
  </store>
  <store>
    @type stdout
  </store>
</match>
I want to know whether this configuration can be tuned to ingest more than 10,000 lines per second per Fluentd node.
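For reference, here is a sketch of the knobs that are usually tuned for higher tail throughput, assuming the standard Fluentd v1 multi-worker, in_tail and buffer parameters; the values are illustrative and not tested against this exact image:

<system>
  # run several worker processes; in_tail inside one worker is effectively single-threaded
  workers 4
</system>
<worker 0>
  <source>
    @type tail
    path /var/log/containers/loggen-*.log
    pos_file /tmp/fluentd-containers.log.pos
    read_lines_limit 5000   # read more lines per I/O iteration (default is 1000)
    tag kubernetes.*
    read_from_head true
    <parse>
      @type cri
    </parse>
  </source>
</worker>
# in the loki <store>, batch instead of flushing every chunk immediately:
<buffer>
  flush_mode interval
  flush_interval 1s
  flush_thread_count 8
  chunk_limit_size 1m
  overflow_action block
</buffer>

Whether this actually reaches 10,000+ lines/sec also depends on the kubernetes_metadata and record_transformer filters and on Loki itself, so it is only a starting point.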

Related

kubernetes audit log filtering with fluentd and forwarding to Splunk

After some struggling I got Fluentd to forward OpenShift audit log files to Splunk. However, this resulted in a huge number of events, so I applied a filter to exclude "get" and "watch". I would still like to keep the get events for secrets.
My question: how do I change the filter to exclude "get" in general but include "get secret"?
apiVersion: v1
kind: ConfigMap
metadata:
name: splunk-kubernetes-audit
namespace: splunk-logging
labels:
app: splunk-kubernetes-audit
data:
fluent.conf: |-
<system>
log_level info
</system>
@include source.audit.conf
@include output.conf
output.conf: |-
<label @SPLUNK>
# = filters for non-container log files =
# extract sourcetype
<filter tail.file.**>
@type grep
<exclude>
key verb
pattern /watch/
</exclude>
<and>
<exclude>
key verb
pattern /get/
</exclude>
</and>
</filter>
<filter tail.file.**>
@type jq_transformer
jq '.record.sourcetype = (.tag | ltrimstr("tail.file.")) | .record.cluster_name = "opcdev" | .record.splunk_index = "openshift_audit_n" | .record'
</filter>
# = custom filters specified by users =
# = output =
<match **>
@type splunk_hec
protocol https
hec_host "splunk-heavyforwarder.linux.rabobank.nl"
hec_port 8088
hec_token "#{ENV['SPLUNK_HEC_TOKEN']}"
index_key splunk_index
insecure_ssl false
ca_file /fluentd/etc/splunk/hec_ca_file
host "#{ENV['K8S_NODE_NAME']}"
source_key source
sourcetype_key sourcetype
<fields>
# currently CRI does not produce log paths with all the necessary
# metadata to parse out pod, namespace, container_name, container_id.
# this may be resolved in the future by this issue: https://github.com/kubernetes/kubernetes/issues/58638#issuecomment-385126031
pod
namespace
container_name
cluster_name
container_id
</fields>
app_name splunk-kubernetes-audit
app_version 1.4.7
<buffer>
@type memory
chunk_limit_records 100000
chunk_limit_size 10m
flush_interval 10s
flush_thread_count 1
overflow_action block
retry_max_times 5
retry_type exponential_backoff
retry_wait 2
retry_max_interval 300
total_limit_size 600m
</buffer>
<format>
#type "json"
</format>
</match>
</label>
source.audit.conf: |-
# This fluentd conf file contains sources for log files other than container logs.
<source>
@id tail.file.kube-api-audit
@type tail
@label @SPLUNK
tag tail.file.kube-api-audit
path /var/log/kube-apiserver/audit.log
pos_file /var/log/splunk-fluentd-audit-kube-api-audit.pos
read_from_head true
path_key source
<parse>
@type json
</parse>
</source>
<source>
@id tail.file.oauth-api-audit
@type tail
@label @SPLUNK
tag tail.file.oauth-api-audit
path /var/log/oauth-apiserver/audit.log
pos_file /var/log/splunk-fluentd-audit-oauth-api-audit.pos
read_from_head true
path_key source
<parse>
@type json
</parse>
</source>
<source>
@id tail.file.openshift-api-audit
@type tail
@label @SPLUNK
tag tail.file.openshift-api-audit
path /var/log/openshift-apiserver/audit.log
pos_file /var/log/splunk-fluentd-audit-openshift-api-audit.pos
read_from_head true
path_key source
<parse>
@type json
</parse>
</source>
The Secret:
apiVersion: v1
kind: Secret
metadata:
labels:
app: splunk-kubernetes-audit
name: splunk-kubernetes-audit
namespace: splunk-logging
type: Opaque
data:
hec_ca_file: {{ base64 encoded CA certificate }}
splunk_hec_token: {{ base64 encoded Get_token_for_index }}
And the DaemonSet:
apiVersion: apps/v1
kind: DaemonSet
metadata:
annotations:
configmap.update: "1"
deprecated.daemonset.template.generation: "34"
generation: 34
labels:
app: splunk-kubernetes-audit
engine: fluentd
name: splunk-kubernetes-audit
namespace: splunk-logging
spec:
revisionHistoryLimit: 10
selector:
matchLabels:
app: splunk-kubernetes-audit
release: rabo-splunk
template:
metadata:
annotations:
checksum/config: 0574cfe32baa34dcb02d7e3293f7c5ac0379ffb45cf4b7e455eb6975e6102320
configmap.update.trigger: "1"
prometheus.io/port: "24231"
prometheus.io/scrape: "true"
creationTimestamp: null
labels:
app: splunk-kubernetes-audit
release: rabo-splunk
spec:
containers:
- env:
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: MY_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: MY_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: SPLUNK_HEC_TOKEN
valueFrom:
secretKeyRef:
key: splunk_hec_token
name: splunk-kubernetes-audit
- name: SSL_CERT_FILE
value: /fluentd/etc/splunk/hec_ca_file
image: docker.io/splunk/fluentd-hec:1.2.6
imagePullPolicy: Always
name: splunk-fluentd-k8s-audit
ports:
- containerPort: 24231
name: metrics
protocol: TCP
resources:
requests:
cpu: 500m
memory: 600Mi
securityContext:
privileged: true
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/log
name: varlog
- mountPath: /var/log/kube-apiserver
name: varlogkube
readOnly: true
- mountPath: /var/log/oauth-apiserver
name: varlogoauth
readOnly: true
- mountPath: /var/log/openshift-apiserver
name: varlogopenshift
readOnly: true
- mountPath: /fluentd/etc
name: conf-configmap
- mountPath: /fluentd/etc/splunk
name: secrets
readOnly: true
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: acr-secret
nodeSelector:
node-role.kubernetes.io/master: ''
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: splunk-logging
serviceAccountName: splunk-logging
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
volumes:
- hostPath:
path: /var/log
type: ""
name: varlog
- hostPath:
path: /var/log/kube-apiserver
type: ""
name: varlogkube
- hostPath:
path: /var/log/oauth-apiserver
type: ""
name: varlogoauth
- hostPath:
path: /var/log/openshift-apiserver
type: ""
name: varlogopenshift
- configMap:
defaultMode: 420
name: splunk-kubernetes-audit
name: conf-configmap
- name: secrets
secret:
defaultMode: 420
secretName: splunk-kubernetes-audit
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate
Instead of excluding all get, list and watch actions except for secrets, I opted to exclude the objects that cause the most events, such as namespaces, pods and configmaps. This resulted in the extra filters below and gives a reduction in Splunk events of about 65%. Without filtering, an idle OpenShift cluster generates about 12 GB of audit logging a day.
# reduce the number of events by removing get, watch and list api calls
<filter tail.file.**>
@type grep
<and>
<exclude>
key verb
pattern /list/
</exclude>
<exclude>
key $.objectRef.resource
pattern /namespaces/
</exclude>
</and>
</filter>
<filter tail.file.**>
@type grep
<and>
<exclude>
key verb
pattern /list/
</exclude>
<exclude>
key $.objectRef.resource
pattern /pods/
</exclude>
</and>
</filter>
<filter tail.file.**>
@type grep
<and>
<exclude>
key verb
pattern /watch/
</exclude>
<exclude>
key $.objectRef.resource
pattern /namespaces/
</exclude>
</and>
</filter>
<filter tail.file.**>
@type grep
<and>
<exclude>
key verb
pattern /watch/
</exclude>
<exclude>
key $.objectRef.resource
pattern /pods/
</exclude>
</and>
</filter>
<filter tail.file.**>
@type grep
<and>
<exclude>
key verb
pattern /watch/
</exclude>
<exclude>
key $.objectRef.resource
pattern /configmaps/
</exclude>
</and>
</filter>
<filter tail.file.**>
@type grep
<and>
<exclude>
key verb
pattern /get/
</exclude>
<exclude>
key $.objectRef.resource
pattern /configmaps/
</exclude>
</and>
</filter>
<filter tail.file.**>
@type grep
<and>
<exclude>
key verb
pattern /get/
</exclude>
<exclude>
key $.objectRef.resource
pattern /namespaces/
</exclude>
</and>
</filter>
<filter tail.file.**>
@type grep
<and>
<exclude>
key verb
pattern /get/
</exclude>
<exclude>
key $.objectRef.resource
pattern /clusterrolebindings/
</exclude>
</and>
</filter>
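For completeness, the original ask (exclude "get" in general but keep "get" on secrets) can probably also be expressed with a single grep filter, using a negated resource pattern inside an <and> block. This is an untested sketch that reuses the key names from the filters above and assumes Ruby-style negative lookahead is accepted in the pattern:

<filter tail.file.**>
  @type grep
  <and>
    <exclude>
      key verb
      pattern /^get$/
    </exclude>
    <exclude>
      key $.objectRef.resource
      pattern /^(?!secrets$)/
    </exclude>
  </and>
</filter>

A record is dropped only when both excludes match, i.e. the verb is get and the resource is anything other than secrets, so get requests on secrets are kept.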

HELM UPGRADE ISSUE: spec.template.spec.containers[0].volumeMounts[2].name: Not found: "NAME"

I have been trying to create a POD with HELM UPGRADE:
helm upgrade --values=$(System.DefaultWorkingDirectory)/_NAME-deploy-CI/drop/values-NAME.yaml --namespace sda-NAME-pro --install --reset-values --debug --wait NAME .
but I am running into the error below:
2020-07-08T12:51:28.0678161Z upgrade.go:367: [debug] warning: Upgrade "NAME" failed: failed to create resource: Deployment.apps "NAME" is invalid: [spec.template.spec.volumes[1].secret.secretName: Required value, spec.template.spec.containers[0].volumeMounts[2].name: Not found: "secretvol"]
2020-07-08T12:51:28.0899772Z Error: UPGRADE FAILED: failed to create resource: Deployment.apps "NAME" is invalid: [spec.template.spec.volumes[1].secret.secretName: Required value, spec.template.spec.containers[0].volumeMounts[2].name: Not found: "secretvol"]
YML part
        volumeMounts:
          - name: secretvol
            mountPath: "/etc/secret-vol"
            readOnly: true
      volumes:
        - name: jks
          secret:
            secretName: {{ .Values.secret.jks }}
        - name: secretvol
          secret:
            secretName: {{ .Values.secret.secretvol }}
Maybe the first deploy needs a different command the first time? How can I specify these values to test it?
TL;DR
The issue you've encountered:
2020-07-08T12:51:28.0899772Z Error: UPGRADE FAILED: failed to create resource: Deployment.apps "NAME" is invalid: [spec.template.spec.volumes[1].secret.secretName: Required value, spec.template.spec.containers[0].volumeMounts[2].name: Not found: "secretvol"]
is connected with the fact that the variable: {{ .Values.secret.secretvol }} is missing.
To fix it you will need to set this value in either:
the Helm command that you are using (see the example just below), or
the file that stores your values in the Helm chart.
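A minimal sketch of the first option; my-app-secrets is just a placeholder for whatever Secret actually exists in the target namespace:

helm upgrade --install --namespace sda-NAME-pro \
  --values=values-NAME.yaml \
  --set secret.secretvol=my-app-secrets \
  NAME .

--set secret.secretvol=... supplies the missing value at upgrade time; adding secretvol: my-app-secrets under the secret: key in values-NAME.yaml has the same effect.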
A tip!
You can run your Helm command with --debug --dry-run to output the generated YAML. This should show you where the errors are located.
There is official documentation about values in Helm. Please take a look here:
Helm.sh: Docs: Chart template guide: Values files
Based on:
I have been trying to create a POD with HELM UPGRADE:
I've made an example that reproduces your issue and shows how you can fix it.
Steps:
Create a helm chart with correct values
Edit the values to reproduce the error
Create a helm chart
For simplicity of the setup I created a basic Helm chart.
Below is the structure of files and directories:
❯ tree helm-dir
helm-dir
├── Chart.yaml
├── templates
│ └── pod.yaml
└── values.yaml
1 directory, 3 files
Create Chart.yaml file
Below is the Chart.yaml file:
apiVersion: v2
name: helm-pod
description: A Helm chart for spawning pod with volumeMount
version: 0.1.0
Create a values.yaml file
Below is the simple values.yaml file which will be used by default by the $ helm install command:
usedImage: ubuntu
confidentialName: secret-password # name of the secret in Kubernetes
Create a template for a pod
This template is stored in the templates directory under the name pod.yaml.
The YAML definition below will be the template for the spawned pod:
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Values.usedImage }} # value from "values.yaml"
  labels:
    app: {{ .Values.usedImage }} # value from "values.yaml"
spec:
  restartPolicy: Never
  containers:
    - name: {{ .Values.usedImage }} # value from "values.yaml"
      image: {{ .Values.usedImage }} # value from "values.yaml"
      imagePullPolicy: Always
      command:
        - sleep
        - infinity
      volumeMounts:
        - name: secretvol # same name as in spec.volumes.name
          mountPath: "/etc/secret-vol"
          readOnly: true
  volumes:
    - name: secretvol # same name as in spec.containers.volumeMounts.name
      secret:
        secretName: {{ .Values.confidentialName }} # value from "values.yaml"
With the above example you should be able to run $ helm install --name test-pod .
You should get output similar to this:
NAME: test-pod
LAST DEPLOYED: Thu Jul 9 14:47:46 2020
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Pod
NAME READY STATUS RESTARTS AGE
ubuntu 0/1 ContainerCreating 0 0s
Disclaimer!
The ubuntu pod is in the ContainerCreating state as there is no secret named secret-password in the cluster.
You can get more information about your pods by running:
$ kubectl describe pod POD_NAME
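If you want the example pod to actually start, the referenced Secret can be created first; a purely illustrative command where the key/value pair is arbitrary:

$ kubectl create secret generic secret-password --from-literal=password=example

After that the secretvol volume can be mounted and the pod should leave the ContainerCreating state.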
Edit the values to reproduce the error
The error you got, as described earlier, is most probably caused by the value {{ .Values.secret.secretvol }} being missing.
If you were to edit the values.yaml file to:
usedImage: ubuntu
# confidentialName: secret-password # name of the secret in Kubernetes
Notice the added #.
You should get the error below when trying to deploy this chart:
Error: release test-pod failed: Pod "ubuntu" is invalid: [spec.volumes[0].secret.secretName: Required value, spec.containers[0].volumeMounts[0].name: Not found: "secretvol"]
I previously mentioned the --debug --dry-run parameters for Helm.
If you run:
$ helm install --name test-pod --debug --dry-run .
You should get output similar to this (only part of it is shown):
---
# Source: helm-pod/templates/pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu # value from "values.yaml"
  labels:
    app: ubuntu # value from "values.yaml"
spec:
  restartPolicy: Never
  containers:
    - name: ubuntu # value from "values.yaml"
      image: ubuntu # value from "values.yaml"
      imagePullPolicy: Always
      command:
        - sleep
        - infinity
      volumeMounts:
        - name: secretvol # same name as in spec.volumes.name
          mountPath: "/etc/secret-vol"
          readOnly: true
  volumes:
    - name: secretvol # same name as in spec.containers.volumeMounts.name
      secret:
        secretName: # value from "values.yaml"
As you can see, the value of secretName is missing. That's the reason the above error was showing up.
secretName: # value from "values.yaml"
Thank you Dawik, here we have the output:
2020-07-10T11:34:26.3090526Z LAST DEPLOYED: Fri Jul 10 11:34:25 2020
2020-07-10T11:34:26.3091661Z NAMESPACE: sda-NAME
2020-07-10T11:34:26.3092410Z STATUS: pending-upgrade
2020-07-10T11:34:26.3092796Z REVISION: 13
2020-07-10T11:34:26.3093182Z TEST SUITE: None
2020-07-10T11:34:26.3093781Z USER-SUPPLIED VALUES:
2020-07-10T11:34:26.3105880Z affinity: {}
2020-07-10T11:34:26.3106801Z containers:
2020-07-10T11:34:26.3107446Z port: 8080
2020-07-10T11:34:26.3108124Z portName: http
2020-07-10T11:34:26.3108769Z protocol: TCP
2020-07-10T11:34:26.3109440Z env:
2020-07-10T11:34:26.3110613Z APP_NAME: NAME
2020-07-10T11:34:26.3112959Z JAVA_OPTS_EXT: -Djava.security.egd=file:/dev/./urandom -Dcom.sun.net.ssl.checkRevocation=*** -Djavax.net.ssl.trustStore=/etc/truststore/jssecacerts
2020-07-10T11:34:26.3115219Z -Djavax.net.ssl.trustStorePassword=changeit
2020-07-10T11:34:26.3116160Z SPRING_CLOUD_CONFIG_PROFILE: pro
2020-07-10T11:34:26.3116974Z TZ: Europe/Madrid
2020-07-10T11:34:26.3117647Z WILY_MOM_PORT: 5001
2020-07-10T11:34:26.3119640Z spring_application_name: NAME
2020-07-10T11:34:26.3121048Z spring_cloud_config_uri: URI
2020-07-10T11:34:26.3122038Z envSecrets: {}
2020-07-10T11:34:26.3122789Z fullnameOverride: ""
2020-07-10T11:34:26.3123489Z image:
2020-07-10T11:34:26.3124470Z pullPolicy: Always
2020-07-10T11:34:26.3125908Z repository: NAME-REPO
2020-07-10T11:34:26.3126955Z imagePullSecrets: []
2020-07-10T11:34:26.3127675Z ingress:
2020-07-10T11:34:26.3128727Z enabled: ***
2020-07-10T11:34:26.3129509Z livenessProbe: {}
2020-07-10T11:34:26.3130143Z nameOverride: ""
2020-07-10T11:34:26.3131148Z nameSpace: sda-NAME
2020-07-10T11:34:26.3131820Z nodeSelector: {}
2020-07-10T11:34:26.3132444Z podSecurityContext: {}
2020-07-10T11:34:26.3133135Z readinessProbe: {}
2020-07-10T11:34:26.3133742Z replicaCount: 1
2020-07-10T11:34:26.3134636Z resources:
2020-07-10T11:34:26.3135362Z limits:
2020-07-10T11:34:26.3135865Z cpu: 150m
2020-07-10T11:34:26.3136404Z memory: 1444Mi
2020-07-10T11:34:26.3137257Z requests:
2020-07-10T11:34:26.3137851Z cpu: 100m
2020-07-10T11:34:26.3138391Z memory: 1024Mi
2020-07-10T11:34:26.3138942Z route:
2020-07-10T11:34:26.3139486Z alternateBackends: []
2020-07-10T11:34:26.3140087Z annotations: null
2020-07-10T11:34:26.3140642Z enabled: true
2020-07-10T11:34:26.3141226Z fullnameOverride: ""
2020-07-10T11:34:26.3142695Z host:HOST-NAME
2020-07-10T11:34:26.3143480Z labels: null
2020-07-10T11:34:26.3144217Z nameOverride: ""
2020-07-10T11:34:26.3145137Z path: ""
2020-07-10T11:34:26.3145637Z service:
2020-07-10T11:34:26.3146439Z name: NAME
2020-07-10T11:34:26.3147049Z targetPort: http
2020-07-10T11:34:26.3147607Z weight: 100
2020-07-10T11:34:26.3148121Z status: ""
2020-07-10T11:34:26.3148623Z tls:
2020-07-10T11:34:26.3149162Z caCertificate: null
2020-07-10T11:34:26.3149820Z certificate: null
2020-07-10T11:34:26.3150467Z destinationCACertificate: null
2020-07-10T11:34:26.3151091Z enabled: true
2020-07-10T11:34:26.3151847Z insecureEdgeTerminationPolicy: None
2020-07-10T11:34:26.3152483Z key: null
2020-07-10T11:34:26.3153032Z termination: edge
2020-07-10T11:34:26.3154104Z wildcardPolicy: None
2020-07-10T11:34:26.3155687Z secret:
2020-07-10T11:34:26.3156714Z jks: NAME-jks
2020-07-10T11:34:26.3157408Z jssecacerts: jssecacerts
2020-07-10T11:34:26.3157962Z securityContext: {}
2020-07-10T11:34:26.3158490Z service:
2020-07-10T11:34:26.3159127Z containerPort: 8080
2020-07-10T11:34:26.3159627Z port: 8080
2020-07-10T11:34:26.3160103Z portName: http
2020-07-10T11:34:26.3160759Z targetPort: 8080
2020-07-10T11:34:26.3161219Z type: ClusterIP
2020-07-10T11:34:26.3161694Z serviceAccount:
2020-07-10T11:34:26.3162482Z create: ***
2020-07-10T11:34:26.3162990Z name: null
2020-07-10T11:34:26.3163451Z tolerations: []
2020-07-10T11:34:26.3163836Z
2020-07-10T11:34:26.3164534Z COMPUTED VALUES:
2020-07-10T11:34:26.3165022Z affinity: {}
2020-07-10T11:34:26.3165474Z containers:
2020-07-10T11:34:26.3165931Z port: 8080
2020-07-10T11:34:26.3166382Z portName: http
2020-07-10T11:34:26.3166861Z protocol: TCP
2020-07-10T11:34:26.3167284Z env:
2020-07-10T11:34:26.3168046Z APP_NAME: NAME
2020-07-10T11:34:26.3169887Z JAVA_OPTS_EXT: -Djava.security.egd=file:/dev/./urandom -Dcom.sun.net.ssl.checkRevocation=*** -Djavax.net.ssl.trustStore=/etc/truststore/jssecacerts
2020-07-10T11:34:26.3175782Z -Djavax.net.ssl.trustStorePassword=changeit
2020-07-10T11:34:26.3176587Z SPRING_CLOUD_CONFIG_PROFILE: pro
2020-07-10T11:34:26.3177184Z TZ: Europe/Madrid
2020-07-10T11:34:26.3177683Z WILY_MOM_PORT: 5001
2020-07-10T11:34:26.3178559Z spring_application_name: NAME
2020-07-10T11:34:26.3179807Z spring_cloud_config_uri: https://URL
2020-07-10T11:34:26.3181055Z envSecrets: {}
2020-07-10T11:34:26.3181569Z fullnameOverride: ""
2020-07-10T11:34:26.3182077Z image:
2020-07-10T11:34:26.3182707Z pullPolicy: Always
2020-07-10T11:34:26.3184026Z repository: REPO
2020-07-10T11:34:26.3185001Z imagePullSecrets: []
2020-07-10T11:34:26.3185461Z ingress:
2020-07-10T11:34:26.3186215Z enabled: ***
2020-07-10T11:34:26.3186709Z livenessProbe: {}
2020-07-10T11:34:26.3187187Z nameOverride: ""
2020-07-10T11:34:26.3188416Z nameSpace: sda-NAME
2020-07-10T11:34:26.3189008Z nodeSelector: {}
2020-07-10T11:34:26.3189522Z podSecurityContext: {}
2020-07-10T11:34:26.3190056Z readinessProbe: {}
2020-07-10T11:34:26.3190552Z replicaCount: 1
2020-07-10T11:34:26.3191030Z resources:
2020-07-10T11:34:26.3191686Z limits:
2020-07-10T11:34:26.3192320Z cpu: 150m
2020-07-10T11:34:26.3192819Z memory: 1444Mi
2020-07-10T11:34:26.3193319Z requests:
2020-07-10T11:34:26.3193797Z cpu: 100m
2020-07-10T11:34:26.3194463Z memory: 1024Mi
2020-07-10T11:34:26.3194975Z route:
2020-07-10T11:34:26.3195470Z alternateBackends: []
2020-07-10T11:34:26.3196028Z enabled: true
2020-07-10T11:34:26.3196556Z fullnameOverride: ""
2020-07-10T11:34:26.3197601Z host: HOST-NAME
2020-07-10T11:34:26.3198314Z nameOverride: ""
2020-07-10T11:34:26.3198828Z path: ""
2020-07-10T11:34:26.3199285Z service:
2020-07-10T11:34:26.3200023Z name: NAME
2020-07-10T11:34:26.3233791Z targetPort: http
2020-07-10T11:34:26.3234697Z weight: 100
2020-07-10T11:34:26.3235283Z status: ""
2020-07-10T11:34:26.3235819Z tls:
2020-07-10T11:34:26.3236787Z enabled: true
2020-07-10T11:34:26.3237479Z insecureEdgeTerminationPolicy: None
2020-07-10T11:34:26.3238168Z termination: edge
2020-07-10T11:34:26.3238800Z wildcardPolicy: None
2020-07-10T11:34:26.3239421Z secret:
2020-07-10T11:34:26.3240502Z jks: NAME-servers-jks
2020-07-10T11:34:26.3241249Z jssecacerts: jssecacerts
2020-07-10T11:34:26.3241901Z securityContext: {}
2020-07-10T11:34:26.3242534Z service:
2020-07-10T11:34:26.3243157Z containerPort: 8080
2020-07-10T11:34:26.3243770Z port: 8080
2020-07-10T11:34:26.3244543Z portName: http
2020-07-10T11:34:26.3245190Z targetPort: 8080
2020-07-10T11:34:26.3245772Z type: ClusterIP
2020-07-10T11:34:26.3246343Z serviceAccount:
2020-07-10T11:34:26.3247308Z create: ***
2020-07-10T11:34:26.3247993Z tolerations: []
2020-07-10T11:34:26.3248511Z
2020-07-10T11:34:26.3249065Z HOOKS:
2020-07-10T11:34:26.3249600Z MANIFEST:
2020-07-10T11:34:26.3250504Z ---
2020-07-10T11:34:26.3252176Z # Source: NAME/templates/service.yaml
2020-07-10T11:34:26.3253107Z apiVersion: v1
2020-07-10T11:34:26.3253715Z kind: Service
2020-07-10T11:34:26.3254487Z metadata:
2020-07-10T11:34:26.3255338Z name: NAME
2020-07-10T11:34:26.3256318Z namespace: sda-NAME
2020-07-10T11:34:26.3256883Z labels:
2020-07-10T11:34:26.3257666Z helm.sh/chart: NAME-1.0.0
2020-07-10T11:34:26.3258533Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3259785Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3260503Z app.kubernetes.io/version: "latest"
2020-07-10T11:34:26.3261383Z app.kubernetes.io/managed-by: Helm
2020-07-10T11:34:26.3261955Z spec:
2020-07-10T11:34:26.3262427Z type: ClusterIP
2020-07-10T11:34:26.3263292Z ports:
2020-07-10T11:34:26.3264086Z - port: 8080
2020-07-10T11:34:26.3264659Z targetPort: 8080
2020-07-10T11:34:26.3265359Z protocol: TCP
2020-07-10T11:34:26.3265900Z name: http
2020-07-10T11:34:26.3266361Z selector:
2020-07-10T11:34:26.3267220Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3268298Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3269380Z ---
2020-07-10T11:34:26.3270539Z # Source: NAME/templates/deployment.yaml
2020-07-10T11:34:26.3271606Z apiVersion: apps/v1
2020-07-10T11:34:26.3272400Z kind: Deployment
2020-07-10T11:34:26.3273326Z metadata:
2020-07-10T11:34:26.3274457Z name: NAME
2020-07-10T11:34:26.3275511Z namespace: sda-NAME
2020-07-10T11:34:26.3276177Z labels:
2020-07-10T11:34:26.3277219Z helm.sh/chart: NAME-1.0.0
2020-07-10T11:34:26.3278322Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3279447Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3280249Z app.kubernetes.io/version: "latest"
2020-07-10T11:34:26.3281398Z app.kubernetes.io/managed-by: Helm
2020-07-10T11:34:26.3282289Z spec:
2020-07-10T11:34:26.3282881Z replicas: 1
2020-07-10T11:34:26.3283505Z selector:
2020-07-10T11:34:26.3284469Z matchLabels:
2020-07-10T11:34:26.3285628Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3286815Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3287549Z template:
2020-07-10T11:34:26.3288192Z metadata:
2020-07-10T11:34:26.3288826Z labels:
2020-07-10T11:34:26.3289909Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3291596Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3292439Z spec:
2020-07-10T11:34:26.3293109Z serviceAccountName: default
2020-07-10T11:34:26.3293774Z securityContext:
2020-07-10T11:34:26.3294666Z {}
2020-07-10T11:34:26.3295217Z containers:
2020-07-10T11:34:26.3296338Z - name: NAME
2020-07-10T11:34:26.3297240Z securityContext:
2020-07-10T11:34:26.3297859Z {}
2020-07-10T11:34:26.3299353Z image: "REGISTRY-IMAGE"
2020-07-10T11:34:26.3300638Z imagePullPolicy: Always
2020-07-10T11:34:26.3301358Z ports:
2020-07-10T11:34:26.3302491Z - name:
2020-07-10T11:34:26.3303380Z containerPort: 8080
2020-07-10T11:34:26.3304479Z protocol: TCP
2020-07-10T11:34:26.3305325Z env:
2020-07-10T11:34:26.3306418Z - name: APP_NAME
2020-07-10T11:34:26.3307576Z value: "NAME"
2020-07-10T11:34:26.3308757Z - name: JAVA_OPTS_EXT
2020-07-10T11:34:26.3311974Z value: "-Djava.security.egd=file:/dev/./urandom -Dcom.sun.net.ssl.checkRevocation=*** -Djavax.net.ssl.trustStore=/etc/truststore/jssecacerts -Djavax.net.ssl.trustStorePassword=changeit"
2020-07-10T11:34:26.3313760Z - name: SPRING_CLOUD_CONFIG_PROFILE
2020-07-10T11:34:26.3314842Z value: "pro"
2020-07-10T11:34:26.3315890Z - name: TZ
2020-07-10T11:34:26.3316777Z value: "Europe/Madrid"
2020-07-10T11:34:26.3317863Z - name: WILY_MOM_PORT
2020-07-10T11:34:26.3318485Z value: "5001"
2020-07-10T11:34:26.3319421Z - name: spring_application_name
2020-07-10T11:34:26.3320679Z value: "NAME"
2020-07-10T11:34:26.3321858Z - name: spring_cloud_config_uri
2020-07-10T11:34:26.3323093Z value: "https://config.sda-NAME-pro.svc.cluster.local"
2020-07-10T11:34:26.3324190Z resources:
2020-07-10T11:34:26.3324905Z limits:
2020-07-10T11:34:26.3325439Z cpu: 150m
2020-07-10T11:34:26.3325985Z memory: 1444Mi
2020-07-10T11:34:26.3326739Z requests:
2020-07-10T11:34:26.3327305Z cpu: 100m
2020-07-10T11:34:26.3327875Z memory: 1024Mi
2020-07-10T11:34:26.3328436Z volumeMounts:
2020-07-10T11:34:26.3329476Z - name: jks
2020-07-10T11:34:26.3330147Z mountPath: "/etc/jks"
2020-07-10T11:34:26.3331153Z readOnly: true
2020-07-10T11:34:26.3332053Z - name: jssecacerts
2020-07-10T11:34:26.3332739Z mountPath: "/etc/truststore"
2020-07-10T11:34:26.3333356Z readOnly: true
2020-07-10T11:34:26.3334402Z - name: secretvol
2020-07-10T11:34:26.3335565Z mountPath: "/etc/secret-vol"
2020-07-10T11:34:26.3336302Z readOnly: true
2020-07-10T11:34:26.3336935Z volumes:
2020-07-10T11:34:26.3338100Z - name: jks
2020-07-10T11:34:26.3338724Z secret:
2020-07-10T11:34:26.3339946Z secretName: NAME-servers-jks
2020-07-10T11:34:26.3340817Z - name: secretvol
2020-07-10T11:34:26.3341347Z secret:
2020-07-10T11:34:26.3341870Z secretName:
2020-07-10T11:34:26.3342633Z - name: jssecacerts
2020-07-10T11:34:26.3343444Z secret:
2020-07-10T11:34:26.3344103Z secretName: jssecacerts
2020-07-10T11:34:26.3344866Z ---
2020-07-10T11:34:26.3345846Z # Source: NAME/templates/route.yaml
2020-07-10T11:34:26.3346641Z apiVersion: route.openshift.io/v1
2020-07-10T11:34:26.3347112Z kind: Route
2020-07-10T11:34:26.3347568Z metadata:
2020-07-10T11:34:26.3354831Z name: NAME
2020-07-10T11:34:26.3357144Z labels:
2020-07-10T11:34:26.3358020Z helm.sh/chart: NAME-1.0.0
2020-07-10T11:34:26.3359360Z app.kubernetes.io/name: NAME
2020-07-10T11:34:26.3360306Z app.kubernetes.io/instance: NAME
2020-07-10T11:34:26.3361002Z app.kubernetes.io/version: "latest"
2020-07-10T11:34:26.3361888Z app.kubernetes.io/managed-by: Helm
2020-07-10T11:34:26.3362463Z spec:
2020-07-10T11:34:26.3363374Z host: HOST
2020-07-10T11:34:26.3364364Z path:
2020-07-10T11:34:26.3364940Z wildcardPolicy: None
2020-07-10T11:34:26.3365630Z port:
2020-07-10T11:34:26.3366080Z targetPort: http
2020-07-10T11:34:26.3366496Z tls:
2020-07-10T11:34:26.3367144Z termination: edge
2020-07-10T11:34:26.3367630Z insecureEdgeTerminationPolicy: None
2020-07-10T11:34:26.3368072Z to:
2020-07-10T11:34:26.3368572Z kind: Service
2020-07-10T11:34:26.3369571Z name: NAME
2020-07-10T11:34:26.3369919Z weight: 100
2020-07-10T11:34:26.3370115Z status:
2020-07-10T11:34:26.3370287Z ingress: []
2020-07-10T11:34:26.3370419Z
2020-07-10T11:34:26.3370579Z NOTES:
2020-07-10T11:34:26.3370833Z 1. Get the application URL by running these commands:
2020-07-10T11:34:26.3371698Z export POD_NAME=$(kubectl get pods --namespace sda-NAME -l "app.kubernetes.io/name=NAME,app.kubernetes.io/instance=NAME" -o jsonpath="{.items[0].metadata.name}")
2020-07-10T11:34:26.3372278Z echo "Visit http://127.0.0.1:8080 to use your application"
2020-07-10T11:34:26.3373358Z kubectl --namespace sda-NAME port-forward $POD_NAME 8080:80
2020-07-10T11:34:26.3373586Z
2020-07-10T11:34:26.3385047Z ##[section]Finishing: Helm Install/Upgrade NAME
It looks fine and doesn't show any error... but if I run it without --dry-run it crashes at the same point...
On the other hand, if I try it without this volume and secret, it works perfectly! I don't understand it.
Thank you for your patience and guidance.
UPDATE & FIX:
Finally, the problem was in the file values-NAME.yml:
secret:
  jks: VALUE
  jssecacerts: VALUE
It needed the following line under secret:
  secretvol: VALUE
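So the working secret block in values-NAME.yml ends up looking like this, with VALUE standing in for the real names:

secret:
  jks: VALUE
  jssecacerts: VALUE
  secretvol: VALUE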

Kubernetes: Replace file by configmap

Here is my ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
name: chart-1591249502-zeppelin
namespace: ra-iot-dev
labels:
helm.sh/chart: zeppelin-0.1.0
app.kubernetes.io/name: zeppelin
app.kubernetes.io/instance: chart-1591249502
app.kubernetes.io/version: "0.9.0"
app.kubernetes.io/managed-by: Helm
data:
log4j.properties: |-
log4j.rootLogger = INFO, dailyfile
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n
log4j.appender.dailyfile.DatePattern=.yyyy-MM-dd
log4j.appender.dailyfile.DEBUG = INFO
log4j.appender.dailyfile = org.apache.log4j.DailyRollingFileAppender
log4j.appender.dailyfile.File = ${zeppelin.log.file}
log4j.appender.dailyfile.layout = org.apache.log4j.PatternLayout
log4j.appender.dailyfile.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n
log4j.logger.org.apache.zeppelin.python=DEBUG
log4j.logger.org.apache.zeppelin.spark=DEBUG
I'm trying to mount this file at /zeppelin/conf/log4j.properties inside the pod.
Here is my Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: chart-1591249502-zeppelin
labels:
helm.sh/chart: zeppelin-0.1.0
app.kubernetes.io/name: zeppelin
app.kubernetes.io/instance: chart-1591249502
app.kubernetes.io/version: "0.9.0"
app.kubernetes.io/managed-by: Helm
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: zeppelin
app.kubernetes.io/instance: chart-1591249502
template:
metadata:
labels:
app.kubernetes.io/name: zeppelin
app.kubernetes.io/instance: chart-1591249502
spec:
serviceAccountName: chart-1591249502-zeppelin
securityContext:
{}
containers:
- name: zeppelin
securityContext:
{}
image: "apache/zeppelin:0.9.0"
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 8080
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
resources:
{}
env:
- name: ZEPPELIN_PORT
value: "8080"
- name: ZEPPELIN_K8S_CONTAINER_IMAGE
value: apache/zeppelin:0.9.0
- name: ZEPPELIN_RUN_MODE
value: local
volumeMounts:
- name: log4j-properties-volume
mountPath: /zeppelin/conf/log4j.properties
volumes:
- name: log4j-properties-volume
configMap:
name: chart-1591249502-zeppelin
items:
- key: log4j.properties
path: keys
I'm getting this error event in kubernetes:
Error: failed to start container "zeppelin": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:364: container init caused \"rootfs_linux.go:54: mounting \\"/var/lib/origin/openshift.local.volumes/pods/63ac209e-a626-11ea-9e39-0050569f5f65/volumes/kubernetes.io~configmap/log4j-properties-volume\\" to rootfs \\"/var/lib/docker/overlay2/33f3199e46111afdcd64d21c58b010427c27761b02473967600fb95ab6d92e21/merged\\" at \\"/var/lib/docker/overlay2/33f3199e46111afdcd64d21c58b010427c27761b02473967600fb95ab6d92e21/merged/zeppelin/conf/log4j.properties\\" caused \\"not a directory\\"\"" : Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Keep in mind that I only want to replace an existing file. The /zeppelin/conf/ directory contains several files, and I only want to replace /zeppelin/conf/log4j.properties.
Any ideas?
From the logs I saw that you are working on OpenShift; however, I was able to do it on GKE.
I deployed the plain Zeppelin Deployment from your example.
zeppelin@chart-1591249502-zeppelin-557d895cd5-v46dt:~/conf$ cat log4j.properties
#
# Licensed to the Apache Software Foundation (ASF) under one or more
...
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
...
# limitations under the License.
#
log4j.rootLogger = INFO, stdout
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n
zeppelin@chart-1591249502-zeppelin-557d895cd5-v46dt:~/conf$
If you want to replace one specific file, you need to use subPath. There is also an article with another example, which can be found here.
Issue 1. ConfigMap belongs to namespace
Your Deployment did not contain any namespace, so it was deployed in the default namespace, while the ConfigMap included namespace: ra-iot-dev.
$ kubectl api-resources
NAME SHORTNAMES APIGROUP NAMESPACED KIND
...
configmaps cm true ConfigMap
...
If you keep this namespace mismatch, you will probably get an error like:
MountVolume.SetUp failed for volume "log4j-properties-volume" : configmap "chart-1591249502-zeppelin" not found
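A quick way to see the mismatch, using the names from the manifests above:

$ kubectl -n default get configmap chart-1591249502-zeppelin     # namespace the Deployment runs in
$ kubectl -n ra-iot-dev get configmap chart-1591249502-zeppelin  # namespace the ConfigMap was created in

The mounted ConfigMap must live in the same namespace as the pod that mounts it.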
Issue 2. subPath to replace file
I've changed one part in the Deployment (added subPath):
          volumeMounts:
            - name: log4j-properties-volume
              mountPath: /zeppelin/conf/log4j.properties
              subPath: log4j.properties
      volumes:
        - name: log4j-properties-volume
          configMap:
            name: chart-1591249502-zeppelin
and another in the ConfigMap (removed the namespace and set proper names):
apiVersion: v1
kind: ConfigMap
metadata:
  name: chart-1591249502-zeppelin
  labels:
    helm.sh/chart: zeppelin-0.1.0
    app.kubernetes.io/name: zeppelin
    app.kubernetes.io/instance: chart-1591249502
    app.kubernetes.io/version: "0.9.0"
    app.kubernetes.io/managed-by: Helm
data:
  log4j.properties: |-
    ...
After that, the content of the file looks like this:
$ kubectl exec -ti chart-1591249502-zeppelin-64495dcfc8-ccddr -- /bin/bash
zeppelin@chart-1591249502-zeppelin-64495dcfc8-ccddr:~$ cd conf
zeppelin@chart-1591249502-zeppelin-64495dcfc8-ccddr:~/conf$ ls
configuration.xsl log4j.properties log4j_yarn_cluster.properties zeppelin-env.cmd.template zeppelin-site.xml.template
interpreter-list log4j.properties2 shiro.ini.template zeppelin-env.sh.template
zeppelin@chart-1591249502-zeppelin-64495dcfc8-ccddr:~/conf$ cat log4j.properties
log4j.rootLogger = INFO, dailyfile
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n
log4j.appender.dailyfile.DatePattern=.yyyy-MM-dd
log4j.appender.dailyfile.DEBUG = INFO
log4j.appender.dailyfile = org.apache.log4j.DailyRollingFileAppender
log4j.appender.dailyfile.File = ${zeppelin.log.file}
log4j.appender.dailyfile.layout = org.apache.log4j.PatternLayout
log4j.appender.dailyfile.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n
log4j.logger.org.apache.zeppelin.python=DEBUG
log4j.logger.org.apache.zeppelin.spark=DEBUGzeppelin@chart-1591249502-zeppelin-64495dcfc8-ccddr:~/conf$
---
apiVersion: v1
kind: ConfigMap
metadata:
name: application-config-test
namespace: ***
labels:
app: test
environment: ***
tier: backend
data:
application.properties: |-
ulff.kafka.configuration.acks=0
ulff.kafka.configuration[bootstrap.servers]=IP
ulff.kafka.topic=test-topic
ulff.enabled=true
logging.level.com.anurag.gigthree.ulff.kafka=DEBUG
management.port=9080
management.security.enabled=false
management.endpoints.web.exposure.include= "metrics,health,threaddump,prometheus,heapdump"
management.endpoint.prometheus.enabled=true
management.metrics.export.prometheus.enabled=true
## For apigee PROD
apigee.url=****
### Secrets in Kubenetes accessed by ENV variables
apigee.clientID=apigeeClientId
apigee.clientsecret=apigeeClientSecret
spring.mvc.throw-exception-if-no-handler-found=true
#For OAuth details for apigee
oauth2.config.clientId=${apigee.clientID}
oauth2.config.clientSecret=${apigee.clientsecret}
oauth2.config.authenticationScheme=form
oauth2.config.scopes[0]=test_INTEGRATION_ALL
oauth2.config.accessTokenUri=${apigee.url}/oauth2/token
oauth2.config.requestTimeout=55000
oauth2.restTemplateBuilder.enabled=true
#spring jackson properties
spring.jackson.default-property-inclusion=always
spring.jackson.generator.ignore-unknown=true
spring.jackson.mapper.accept-case-insensitive-properties=true
spring.jackson.deserialization.fail-on-unknown-properties=false
# service urls for apply profile
services.apigeeIntegrationAPI.doProfileChangeUrl=${apigee.url}/v1/testintegration
services.apigeeIntegrationAPI.modifyServiceOfSubscriberUrl=${apigee.url}/v1/testintegration/subscribers
# service urls for retrieve profile
services.apigeeIntegrationAPI.getProfileUrl=${apigee.url}/v1
services.apigeeIntegrationAPI.readKeyUrl=${apigee.url}/v1/testintegration
test.acfStatusConfig[1].country-prefix=
test.acfStatusConfig[1].country-code=
test.acfStatusConfig[1].profile-name=
test.acfStatusConfig[1].adult=ON
test.acfStatusConfig[1].hvw=ON
test.acfStatusConfig[1].ms=ON
test.acfStatusConfig[1].dc=ON
test.acfStatusConfig[1].at=OFF
test.acfStatusConfig[1].gambling=
test.acfStatusConfig[1].dating=OFF
test.acfStatusConfig[1].sex=OFF
test.acfStatusConfig[1].sn=OFF
logging.pattern.level=%X{ulff.transaction-id:-} -%5p
logging.config=/app/config/log4j2.yml
log4j2.yml: |-
Configutation:
name: test-ms
packages :
Appenders:
Console:
- name: sysout
target: SYSTEM_OUT
PatternLayout:
pattern: "%d{HH:mm:ss.SSS} %-5p [%-7t] %F:%L - %m%n"
- name: syserr
target: SYSTEM_ERR
PatternLayout:
pattern: "%d{HH:mm:ss.SSS} %-5p [%-7t] %F:%L - %m%n"
Filters:
ThresholdFilter :
level: "WARN"
onMatch: "ACCEPT"
Kafka:
name : kafkaAppender
topic: af.prod.ms.test.tomcat
JSONLayout:
complete: "false"
compact: "false"
eventEol: "true"
includeStacktrace: "true"
properties: "true"
Property:
name: "bootstrap.servers"
value: ""
Loggers:
Root:
level: INFO
AppenderRef:
- ref: sysout
- ref: syserr
#### test 1 test 2 Separate kafka log from application log
Logger:
- name: com.anurag
level: INFO
AppenderRef:
- ref: kafkaAppender
- name: org.springframework
level: INFO
AppenderRef:
- ref: kafkaAppender

Helm appears to parse my chart differently depending on whether I use --dry-run --debug?

So I was deploying a new cronjob today and got the following error:
Error: release acs-export-cronjob failed: CronJob.batch "acs-export-cronjob" is invalid: [spec.jobTemplate.spec.template.spec.containers: Required value, spec.jobTemplate.spec.template.spec.restartPolicy: Unsupported value: "Always": supported values: "OnFailure", "Never"]
here's some output from running helm on the same chart, no changes made, but with the --debug --dry-run flags:
NAME: acs-export-cronjob
REVISION: 1
RELEASED: Wed Oct 17 14:12:02 2018
CHART: generic-job-0.1.0
USER-SUPPLIED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
schedule: 0 * * * *
COMPUTED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
resources:
cpu: 100m
memory: 1Gi
schedule: 0 * * * *
sonicNodeGroup: api
springProfiles: export-job
HOOKS:
MANIFEST:
---
# Source: generic-job/templates/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: acs-export-cronjob-sa
---
# Source: generic-job/templates/rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-manager
rules:
- apiGroups: ["extensions"]
resources: ["deployments"]
verbs: ["get"]
---
# Source: generic-job/templates/rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-binding
subjects:
- kind: ServiceAccount
name: acs-export-cronjob-sa
roleRef:
kind: Role
name: acs-export-cronjob-manager
apiGroup: rbac.authorization.k8s.io
---
# Source: generic-job/templates/generic-job.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: acs-export-cronjob
labels:
app: generic-job
chart: "generic-job-0.1.0"
release: "acs-export-cronjob"
heritage: "Tiller"
spec:
schedule: 0 * * * *
successfulJobsHistoryLimit: 5
failedJobsHistoryLimit: 5
concurrencyPolicy: Forbid
startingDeadlineSeconds: 120
jobTemplate:
spec:
metadata:
name: acs-export-cronjob
labels:
jobgroup: acs-export-jobs
app: generic-job
chart: "generic-job-0.1.0"
release: "acs-export-cronjob"
heritage: "Tiller"
spec:
template:
metadata:
labels:
jobgroup: acs-export-jobs
app: generic-job
chart: "generic-job-0.1.0"
release: "acs-export-cronjob"
heritage: "Tiller"
annotations:
iam.amazonaws.com/role: arn:aws:iam::<censored>:role/k8s-service-role
spec:
restartPolicy: Never #<----------this is not 'Always'!!
serviceAccountName: acs-export-cronjob-sa
tolerations:
- key: sonic-node-group
operator: Equal
value: api
effect: NoSchedule
nodeSelector:
sonic-node-group: api
volumes:
- name: config
emptyDir: {}
initContainers:
- name: "get-users-vmargs-from-deployment"
image: <censored>.amazonaws.com/utils/kubectl-helm:latest
command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(#.name=='APP_SPECIFIC_VM_ARGS')].value}\" > /config/users-vmargs && cat /config/users-vmargs"]
volumeMounts:
- mountPath: /config
name: config
- name: "get-users-yaml-appconfig-from-deployment"
image: <censored>.amazonaws.com/utils/kubectl-helm:latest
command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(#.name=='APP_YAML_CONFIG')].value}\" > /config/users-appconfig && cat /config/users-appconfig"]
volumeMounts:
- mountPath: /config
name: config
containers: #<--------this field is not missing!
- image: <censored>.amazonaws.com/sonic/acs-export:latest
imagePullPolicy: Always
name: "users-batch"
command:
- "bash"
- "-c"
- 'APP_SPECIFIC_VM_ARGS="$(cat /config/users-vmargs) " APP_YAML_CONFIG="$(cat /config/users-appconfig)" /vertx-app/startvertx.sh'
env:
- name: FRENV
value: "batch"
- name: STACKNAME
value: eu1-test
- name: SPRING_PROFILES
value: "export-job"
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- mountPath: /config
name: config
resources:
limit:
cpu: 100m
memory: 1Gi
If you paid attention, you may have noticed line 101 (I added the comment afterwards) in the debug output, which sets restartPolicy to Never, quite the opposite of the Always that the error message claims.
You may also have noticed line 126 (again, I added the comment after the fact) of the debug output, where the mandatory field containers is specified, again in contradiction to the error message.
What's going on here?
Hah! Found it! It was a simple mistake actually: I had an extra spec: metadata section under jobTemplate which was duplicated. Removing one of the dupes fixed my issue.
I really wish Helm's error messages were more helpful.
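To make the mistake concrete: judging from the debug output above, the jobTemplate section most likely rendered something like this (a reconstruction, not the literal chart source):

jobTemplate:
  spec:
    metadata:            # extra block; JobSpec has no metadata field here
      name: acs-export-cronjob
      labels:
        jobgroup: acs-export-jobs
    spec:                # duplicated spec level
      template:
        ...

Since the API server ignores the misplaced nesting, it effectively sees an empty pod template, so containers is missing and restartPolicy gets defaulted to Always, which is exactly what the error message complains about.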
The corrected chart looks like:
NAME: acs-export-cronjob
REVISION: 1
RELEASED: Wed Oct 17 14:12:02 2018
CHART: generic-job-0.1.0
USER-SUPPLIED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
schedule: 0 * * * *
COMPUTED VALUES:
applicationName: users
command: publishAllForRealm
image: <censored>.amazonaws.com/sonic/acs-export:latest
jobAppArgs: ""
jobVmArgs: ""
jobgroup: acs-export-jobs
name: acs-export-cronjob
resources:
cpu: 100m
memory: 1Gi
schedule: 0 * * * *
sonicNodeGroup: api
springProfiles: export-job
HOOKS:
MANIFEST:
---
# Source: generic-job/templates/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: acs-export-cronjob-sa
---
# Source: generic-job/templates/rbac.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-manager
rules:
- apiGroups: ["extensions"]
resources: ["deployments"]
verbs: ["get"]
---
# Source: generic-job/templates/rbac.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: acs-export-cronjob-binding
subjects:
- kind: ServiceAccount
name: acs-export-cronjob-sa
roleRef:
kind: Role
name: acs-export-cronjob-manager
apiGroup: rbac.authorization.k8s.io
---
# Source: generic-job/templates/generic-job.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: acs-export-cronjob
labels:
app: generic-job
chart: "generic-job-0.1.0"
release: "acs-export-cronjob"
heritage: "Tiller"
spec:
schedule: 0 * * * *
successfulJobsHistoryLimit: 5
failedJobsHistoryLimit: 5
concurrencyPolicy: Forbid
startingDeadlineSeconds: 120
jobTemplate:
spec:
template:
metadata:
labels:
jobgroup: acs-export-jobs
app: generic-job
chart: "generic-job-0.1.0"
release: "acs-export-cronjob"
heritage: "Tiller"
annotations:
iam.amazonaws.com/role: arn:aws:iam::<censored>:role/k8s-service-role
spec:
restartPolicy: Never
serviceAccountName: acs-export-cronjob-sa
tolerations:
- key: sonic-node-group
operator: Equal
value: api
effect: NoSchedule
nodeSelector:
sonic-node-group: api
volumes:
- name: config
emptyDir: {}
initContainers:
- name: "get-users-vmargs-from-deployment"
image: <censored>.amazonaws.com/utils/kubectl-helm:latest
command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(#.name=='APP_SPECIFIC_VM_ARGS')].value}\" > /config/users-vmargs && cat /config/users-vmargs"]
volumeMounts:
- mountPath: /config
name: config
- name: "get-users-yaml-appconfig-from-deployment"
image: <censored>.amazonaws.com/utils/kubectl-helm:latest
command: ["sh", "-c", "kubectl -n eu1-test get deployment users-vertxapp -o jsonpath=\"{..spec.containers[0].env[?(#.name=='APP_YAML_CONFIG')].value}\" > /config/users-appconfig && cat /config/users-appconfig"]
volumeMounts:
- mountPath: /config
name: config
containers:
- image: <censored>.amazonaws.com/sonic/acs-export:latest
imagePullPolicy: Always
name: "users-batch"
command:
- "bash"
- "-c"
- 'APP_SPECIFIC_VM_ARGS="$(cat /config/users-vmargs) " APP_YAML_CONFIG="$(cat /config/users-appconfig)" /vertx-app/startvertx.sh'
env:
- name: FRENV
value: "batch"
- name: STACKNAME
value: eu1-test
- name: SPRING_PROFILES
value: "export-job"
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- mountPath: /config
name: config
resources:
limit:
cpu: 100m
memory: 1Gi
This may be due to a formatting error.
Look at the examples here and here.
The structure is:
jobTemplate:
  spec:
    template:
      spec:
        restartPolicy: Never
As per provided output you have spec and restartPolicy on the same line:
jobTemplate:
spec:
template:
spec:
restartPolicy: Never #<----------this is not 'Always'!!
The same applies to spec.jobTemplate.spec.template.spec.containers.
Presumably Helm then uses some default values instead of yours.
You can also try to generate the YAML file, convert it to JSON and apply it.
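A rough sketch of that last suggestion; helm template is assumed to be usable from the chart directory, and the Python one-liner assumes PyYAML is installed:

# render the chart locally
helm template . > rendered.yaml

# let the API server validate it without creating anything
kubectl apply --dry-run -f rendered.yaml

# optional: convert to JSON to see the exact structure that will be submitted
python -c 'import sys, yaml, json; print(json.dumps(list(yaml.safe_load_all(sys.stdin)), indent=2))' < rendered.yaml

Validation errors on the rendered file usually point at the misplaced key more precisely than the release-time message does.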

RabbitMQ nodes not able to discover each other and join cluster

I'm new to RabbitMQ and I am trying to set up a highly available queue using StatefulSets. The tutorial I followed is here.
After deploying the StatefulSet and Service to Kubernetes, the nodes are not able to discover each other in the cluster and the pod goes to Status: CrashLoopBackOff. It seems peer discovery is not working as expected and the node is not able to join the cluster.
My cluster nodes are
rabbit@rabbitmq-0, rabbit@rabbitmq-1 and rabbit@rabbitmq-2
$ kubectl exec -it rabbitmq-0 /bin/sh
/ # rabbitmqctl status
Status of node 'rabbit@rabbitmq-0'
Error: unable to connect to node 'rabbit@rabbitmq-0': nodedown
DIAGNOSTICS
===========
attempted to contact: ['rabbit@rabbitmq-0']
rabbit@rabbitmq-0:
* connected to epmd (port 4369) on rabbitmq-0
* epmd reports: node 'rabbit' not running at all
no other nodes on rabbitmq-0
* suggestion: start the node
current node details:
- node name: 'rabbitmq-cli-22@rabbitmq-0'
- home dir: /var/lib/rabbitmq
- cookie hash: 5X3n5Gy+r4FL+M53FHwv3w==
rabbitmq.conf
{ rabbit, [
{ loopback_users, [ ] },
{ tcp_listeners, [ 5672 ] },
{ ssl_listeners, [ ] },
{ hipe_compile, false },
{ cluster_nodes, { [ rabbit@rabbitmq-0, rabbit@rabbitmq-1, rabbit@rabbitmq-2], disc } },
{ssl_listeners, [5671]},
{ssl_options, [{cacertfile,"/etc/rabbitmq/ca_certificate.pem"},
{certfile,"/etc/rabbitmq/server_certificate.pem"},
{keyfile,"/etc/rabbitmq/server_key.pem"},
{verify,verify_peer},
{versions, ['tlsv1.2', 'tlsv1.1']}
{fail_if_no_peer_cert,false}]}
] },
{ rabbitmq_management, [ { listener, [
{ port, 15672 },
{ ssl, false }
] } ] }
].
$ kubectl get statefulset rabbitmq
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app: rabbitmq
name: rabbitmq
namespace: development
resourceVersion: "119265565"
selfLink: /apis/apps/v1/namespaces/development/statefulsets/rabbitmq
uid: 10c2fabc-cbb3-11e7-8821-00505695519e
spec:
podManagementPolicy: OrderedReady
replicas: 3
revisionHistoryLimit: 10
selector:
matchLabels:
app: rabbitmq
serviceName: rabbitmq
template:
metadata:
creationTimestamp: null
labels:
app: rabbitmq
spec:
containers:
- env:
- name: RABBITMQ_ERLANG_COOKIE
valueFrom:
secretKeyRef:
key: rabbitmq-erlang-cookie
name: rabbitmq-erlang-cookie
image: rabbitmq:1.0
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
exec:
command:
- /bin/sh
- -c
- |
if [ -z "$(grep rabbitmq /etc/resolv.conf)" ]; then
sed "s/^search \([^ ]\+\)/search rabbitmq.\1 \1/" /etc/resolv.conf > /etc/resolv.conf.new;
cat /etc/resolv.conf.new > /etc/resolv.conf;
rm /etc/resolv.conf.new;
fi; until rabbitmqctl node_health_check; do sleep 1; done; if [[ "$HOSTNAME" != "rabbitmq-0" && -z "$(rabbitmqctl cluster_status | grep rabbitmq-0)" ]]; then
rabbitmqctl stop_app;
rabbitmqctl join_cluster rabbit@rabbitmq-0;
rabbitmqctl start_app;
fi; rabbitmqctl set_policy ha-all "." '{"ha-mode":"exactly","ha-params":3,"ha-sync-mode":"automatic"}'
name: rabbitmq
ports:
- containerPort: 5672
protocol: TCP
- containerPort: 5671
protocol: TCP
- containerPort: 15672
protocol: TCP
- containerPort: 25672
protocol: TCP
- containerPort: 4369
protocol: TCP
resources:
limits:
cpu: 400m
memory: 2Gi
requests:
cpu: 200m
memory: 1Gi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/rabbitmq
name: rabbitmq-persistent-data-storage
- mountPath: /etc/rabbitmq
name: rabbitmq-config
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 10
volumes:
- name: rabbitmq-config
secret:
defaultMode: 420
secretName: rabbitmq-config
updateStrategy:
type: OnDelete
volumeClaimTemplates:
- metadata:
creationTimestamp: null
name: rabbitmq-persistent-data-storage
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Gi
status:
phase: Pending
status:
currentReplicas: 1
currentRevision: rabbitmq-4234207235
observedGeneration: 1
replicas: 1
updateRevision: rabbitmq-4234207235
$ kubectl get service rabbitmq
apiVersion: v1
kind: Service
metadata:
labels:
app: rabbitmq
name: rabbitmq
namespace: develop
resourceVersion: "59968950"
selfLink: /api/v1/namespaces/develop/services/rabbitmq
uid: ced85a60-cbae-11e7-8821-00505695519e
spec:
clusterIP: None
ports:
- name: tls-amqp
port: 5671
protocol: TCP
targetPort: 5671
- name: management
port: 15672
protocol: TCP
targetPort: 15672
selector:
app: rabbitmq
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
$ kubectl describe pod rabbitmq-0
Name: rabbitmq-0
Namespace: development
Node: node9/170.XX.X.Xx
Labels: app=rabbitmq
controller-revision-hash=rabbitmq-4234207235
Status: Running
IP: 10.25.128.XX
Controlled By: StatefulSet/rabbitmq
Containers:
rabbitmq:
Container ID: docker://f60b06283d3974382a068ded54782b24de4b6da3203c05772a77c65d76aa2e2f
Image: rabbitmq:1.0
Image ID: rabbitmq@sha256:6245a81a1fc0fb
Ports: 5672/TCP, 5671/TCP, 15672/TCP, 25672/TCP, 4369/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Ready: False
Restart Count: 104
Limits:
cpu: 400m
memory: 2Gi
Requests:
cpu: 200m
memory: 1Gi
Environment:
RABBITMQ_ERLANG_COOKIE: <set to the key 'rabbitmq-erlang-cookie' in secret 'rabbitmq-erlang-cookie'> Optional: false
Mounts:
/etc/rabbitmq from rabbitmq-config (rw)
/var/lib/rabbitmq from rabbitmq-persistent-data-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-lqbp6 (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
rabbitmq-persistent-data-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: rabbitmq-persistent-data-storage-rabbitmq-0
ReadOnly: false
rabbitmq-config:
Type: Secret (a volume populated by a Secret)
SecretName: rabbitmq-config
Optional: false
default-token-lqbp6:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-lqbp6
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: <none>
Events: <none>
This problem is due to failed DNS resolution inside the pod. The pods are not able to contact each other because there are no valid DNS records for them.
To solve this, try creating an additional headless Service, or editing an existing one, to handle DNS resolution for the StatefulSet.
Creating an additional Service for DNS can be done as follows:
kind: Service
apiVersion: v1
metadata:
  namespace: default
  name: rabbitmq
  labels:
    app: rabbitmq
    type: Service
spec:
  ports:
    - name: http
      protocol: TCP
      port: 15672
      targetPort: 15672
    - name: amqp
      protocol: TCP
      port: 5672
      targetPort: 5672
  selector:
    app: rabbitmq
  type: ClusterIP
  clusterIP: None
Here, in the Service spec, you set type ClusterIP with clusterIP: None, which makes it a headless Service. This should let the pods resolve each other via DNS.
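A quick way to verify the records once the headless Service exists; the pod name, Service name and namespace below follow the example above and may differ in your cluster, and the check assumes getent (or nslookup) is available in the image:

$ kubectl exec -it rabbitmq-0 -- getent hosts rabbitmq-0.rabbitmq.default.svc.cluster.local

If the name resolves to the pod IP, node names like rabbit@rabbitmq-0 become reachable from the other replicas.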
Cheers!!
Rishabh