Service Account created via helm chart doesn't work with REST API - grafana

I am creating/configuring a service account (SA) in the Helm chart.
It is created (in the k8s namespace, as a secret), but when I try to use its token in an HTTP/REST API call, e.g. to get folders, the response is:
"invalid API key"
The idea is that whenever Grafana is installed from scratch, an SA should be provisioned; this SA token will then be used for accessing the REST API.
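The failing call looks roughly like this (host and token are placeholders, not the real values from my cluster):
# Sketch of the failing request; GRAFANA_URL and TOKEN are placeholders.
TOKEN="<token-from-the-provisioned-secret>"
GRAFANA_URL="http://grafana.monitoring.svc"
curl -s -H "Authorization: Bearer ${TOKEN}" "${GRAFANA_URL}/api/folders"
# Response: {"message":"invalid API key"}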
Chart.yaml
apiVersion: v2
name: kraken-observability-stack
version: 0.1.0
# We don't have a built-in-house app so we don't set
# appVersion: 0.1.0
kubeVersion: "^1.20.0-0"
description: The kraken observability stack for collecting and visualizing metrics, logs and traces related to CI pipelines.
home: https://docs.net/
dependencies:
  - name: grafana
    repository: https://grafana.github.io/helm-charts
    version: 6.50.x
  - name: mimir-distributed
    repository: https://grafana.github.io/helm-charts
    version: 3.2.x
  - name: loki-distributed
    repository: https://grafana.github.io/helm-charts
    version: 0.68.x
  - name: tempo-distributed
    repository: https://grafana.github.io/helm-charts
    version: 1.0.x
  - name: opentelemetry-collector
    repository: https://open-telemetry.github.io/opentelemetry-helm-charts
    version: 0.47.x
(partial) values.yaml
grafana:
  testFramework:
    enabled: false
  resources:
    limits:
      # maybe we shouldn't set cpu limits to avoid overbooking of resources.
      # cpu: 1000m
      memory: 1Gi
    requests:
      memory: 200Mi
      cpu: 200m
  grafana.ini:
    force_migration: true
    data_proxy:
      timeout: 60s
    # feature_toggles:
    #   enable: tempoServiceGraph,tempoSearch,tempoBackendSearch,tempoApmTable
    auth:
      login_cookie_name: "kraken_grafana_session"
    auth.anonymous:
      enabled: true
      org_name: 'CICDS Pipelines User'
      org_role: 'Viewer'
    analytics:
      reporting_enabled: false
      check_for_updates: false
      check_for_plugin_updates: false
      enable_feedback_links: false
    log:
      level: warn
      mode: console
    plugins:
      enable_alpha: true
      app_tls_skip_verify_insecure: true
      allow_loading_unsigned_plugins: true
  # podAnnotations for grafana to expose its own metrics
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/schema: "http"
    prometheus.io/port: "http"
    prometheus.io/path: "/metrics"
  rbac:
    # disable Create and use RBAC resources
    create: false
    # disable Create PodSecurityPolicy (we don't have privileges for that)
    pspEnabled: false
    # disable to enforce AppArmor in created PodSecurityPolicy
    pspUseAppArmor: false
  serviceAccount:
    create: true
    name: grafana-init-sa
    labels: {kraken-init}
  replicas: 3
  image:
    # repository: docker-virtual.repository.net/grafana/grafana
    repository: grafana/grafana
  downloadDashboardsImage:
    repository: docker-virtual.repository.net/curlimages/curl
    tag: 7.85.0
    pullPolicy: IfNotPresent
  persistence:
    type: statefulset
    enabled: true
  initChownData:
    ## This allows the prometheus-server to be run with an arbitrary user
    ##
    enabled: false
    # image:
    #   repository: docker-virtual.repository.net/busybox
  # Administrator credentials when not using an existing secret (see below)
  adminUser: admin
  adminPassword: changeit
  # Use an existing secret for the admin user.
  # grafana-admin-credentials name is reserved by the operator and thus -creds
  admin:
    existingSecret: "grafana-admin-user"
    userKey: ADMIN_USER
    passwordKey: ADMIN_PASSWORD
  env:
    HTTP_PROXY: http://p985nst:p985nst#proxyvip-se.sbcore.net:8080/
    HTTPS_PROXY: http://p985nst:p985nst#proxyvip-se.sbcore.net:8080/
    NO_PROXY: .cluster.local,.net,.sbcore.net,.svc,10.0.0.0/8,172.30.0.0/16,localhost
  # ## Pass the plugins you want installed as a list.
  # ##
  # plugins:
  #   - digrich-bubblechart-panel
  #   - grafana-clock-panel
  #   - grafana-piechart-panel
  #   - natel-discrete-panel
  extraSecretMounts:
    - name: loki-credentials-secret-mount
      secretName: loki-credentials
      defaultMode: 0440
      mountPath: /etc/secrets/.loki_credentials
      readOnly: true
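For reference, this is roughly how the token is read back from the secret before being used in the call above; the secret name and namespace are placeholders, not the exact names created by the chart:
# Decode the token from the Kubernetes secret (names are placeholders; adjust to the actual secret).
kubectl -n <namespace> get secret <sa-token-secret> -o jsonpath='{.data.token}' | base64 -d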

Related

I created a ServiceMonitor using json-exporter in a Prometheus environment, but the metrics could not be verified. Is there a way to check the metrics?

I am a beginner using Prometheus and Grafana to monitor the values of a REST API.
Prometheus, json-exporter and Grafana were all installed with Helm charts; Prometheus was installed with the default values.yaml and json-exporter with a custom values.yaml.
I checked that Prometheus has the json-exporter's ServiceMonitor set as a target, but I couldn't verify its metrics.
How can I check the metrics? Below are the environment, screenshots and code.
environment :
kubernetes : v1.22.9
helm : v3.9.2
prometheus-json-exporter helm chart : v0.5.0
kube-prometheus-stack helm chart : 0.58.0
screenshots :
https://drive.google.com/drive/folders/1vfjbidNpE2_yXfxdX8oX5eWh4-wAx7Ql?usp=sharing
values.yaml (custom_jsonexporter_values.yaml):
# Default values for prometheus-json-exporter.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
  repository: quay.io/prometheuscommunity/json-exporter
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: []
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""
podAnnotations: []
podSecurityContext: {}
  # fsGroup: 2000
# podLabels:
#   Custom labels for the pod
securityContext: {}
  # capabilities:
  #   drop:
  #     - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000
service:
  type: ClusterIP
  port: 7979
  targetPort: http
  name: http
serviceMonitor:
  ## If true, a ServiceMonitor CRD is created for a prometheus operator
  ## https://github.com/coreos/prometheus-operator
  ##
  enabled: true
  namespace: monitoring
  scheme: http
  # Default values that will be used for all ServiceMonitors created by `targets`
  defaults:
    additionalMetricsRelabels: {}
    interval: 60s
    labels:
      release: prometheus
    scrapeTimeout: 60s
  targets:
    - name: pi2
      url: http://xxx.xxx.xxx.xxx:xxxx
      labels: {}                       # Map of labels for ServiceMonitor. Overrides value set in `defaults`
      interval: 60s                    # Scraping interval. Overrides value set in `defaults`
      scrapeTimeout: 60s               # Scrape timeout. Overrides value set in `defaults`
      additionalMetricsRelabels: {}    # Map of metric labels and values to add
ingress:
  enabled: false
  className: ""
  annotations: []
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local
resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80
nodeSelector: []
tolerations: []
affinity: []
configuration:
  config: |
    ---
    modules:
      default:
        metrics:
          - name: used_storage_byte
            path: '{ .used }'
            help: used storage byte
            values:
              used: '{ .used }'
            labels: {}
          - name: free_storage_byte
            path: '{ .free }'
            help: free storage byte
            labels: {}
            values:
              free: '{ .free }'
          - name: total_storage_byte
            path: '{ .total }'
            help: total storage byte
            labels: {}
            values:
              total: '{ .total }'
prometheusRule:
  enabled: false
  additionalLabels: {}
  namespace: ""
  rules: []
additionalVolumes: []
  # - name: password-file
  #   secret:
  #     secretName: secret-name
additionalVolumeMounts: []
  # - name: password-file
  #   mountPath: "/tmp/mysecret.txt"
  #   subPath: mysecret.txt
Firstly, you can check the Targets page in the Prometheus UI to see a) whether your desired target is even defined and b) whether the endpoint is reachable and being scraped.
However, you may need to troubleshoot a little if either of the above is not the case:
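If the Prometheus UI isn't exposed, a quick way to reach it is a port-forward; the service name and namespace below are the usual defaults created by the operator and may differ in your setup:
# Forward the operator-managed Prometheus service to localhost, then open http://localhost:9090/targets
kubectl -n monitoring port-forward svc/prometheus-operated 9090:9090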
It is important to understand what is happening. You have deployed the Prometheus Operator to the cluster. If you used the default values of the helm chart, you also deployed a Prometheus custom resource (CR). That instance is what tells the Prometheus Operator how to ultimately configure the Prometheus running inside the pod. Certain things are static, like global metric relabeling, but most are dynamic, such as picking up new targets to scrape. Inside the Prometheus CR you will find the options serviceMonitorSelector and serviceMonitorNamespaceSelector (the behaviour is the same for probes and podMonitors, so I'm only going over it once). Assuming you leave the default serviceMonitorNamespaceSelector: {}, the Prometheus Operator will look for ServiceMonitors in all namespaces of the cluster to which its serviceAccount has access. The serviceMonitorSelector field lets you specify a label/value combination that must be present on a ServiceMonitor for it to be picked up. Once one or more ServiceMonitors matching the selectors are found, the Prometheus Operator adjusts the configuration of the actual Prometheus instance (tl;dr version), so you end up with proper scrape targets.
Step 1 for troubleshooting: do your selectors match the labels and namespace of the ServiceMonitors? Actually check those. The default of the prometheus-operator helm chart expects the label release: prometheus-operator, and in your config you don't seem to add that to your json-exporter's ServiceMonitor.
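Two commands that may help with this check (namespace and resource names are assumptions based on a typical kube-prometheus-stack install):
# What label selector does the Prometheus CR actually use?
kubectl -n monitoring get prometheus -o jsonpath='{.items[0].spec.serviceMonitorSelector}'
# What labels does the json-exporter ServiceMonitor carry?
kubectl -n monitoring get servicemonitor --show-labels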
Step 2: the same behaviour outlined for how ServiceMonitors are picked up happens in turn inside the ServiceMonitor itself, so make sure that your Service actually matches what is specced out in the ServiceMonitor.
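Along the same lines, you can compare the ServiceMonitor's selector with the labels on the json-exporter Service (names are placeholders):
# Selector used by the ServiceMonitor to find the Service
kubectl -n monitoring get servicemonitor <json-exporter-servicemonitor> -o jsonpath='{.spec.selector}'
# Labels on the json-exporter Service it is supposed to match
kubectl get svc --all-namespaces --show-labels | grep json-exporter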
To dive deeper into the options you have and what the fields do, check the API documentation.

Installing nextcloud (helm) on linode k8s (v1.19) cluster with nginx-ingress tls/ssl encryption (let's encrypt) doesn't work as expected

I tried to install nextcloud on a Linode-managed k8s cluster with helm. The installation process was no problem; everything worked fine. My nextcloud instance is TLS encrypted, but when I try to log in, nothing happens. In the browser console I only get the information:
Refused to send form data to 'http://cloud.my-domain.io/' because it violates the following Content Security Policy directive: "form-action 'self'".
I figured out that this probably has to do with nextcloud having a problem with the reverse proxy of my k8s nginx ingress. I tried to solve the problem by adding 'overwriteprotocol' => 'https' to my config.php. Then the error message disappears, but I'm not forwarded to the account page; I still only see the login page.
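In case it is useful, an equivalent way to set that flag without editing config.php by hand is the occ CLI inside the nextcloud pod (a sketch; the namespace and deployment name are placeholders for my setup):
# Set overwriteprotocol via occ in the running nextcloud container (names are placeholders).
kubectl -n <namespace> exec deploy/nextcloud -- \
  su -s /bin/sh www-data -c "php occ config:system:set overwriteprotocol --value=https"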
I also tried to use the nginx ingress of the nextcloud helm chart itself, but that doesn't work at all. When I try to access nextcloud in the browser, I only get the information that the instance is unsafe: no login page, no nextcloud page at all, just a blank browser page.
In the end, I tried to use nextcloud without any TLS, and that works fine. But of course that's not what I want; I want a secure connection to nextcloud.
I have no idea what else I can do. What am I doing wrong? I really hope that someone can help me solve the problem. Thanks for your help!
That's my helm values.yaml:
## Official nextcloud image version
## ref: https://hub.docker.com/r/library/nextcloud/tags/
##
image:
repository: nextcloud
tag: 19.0.3-apache
pullPolicy: IfNotPresent
# pullSecrets:
# - myRegistrKeySecretName
nameOverride: ""
fullnameOverride: ""
# Number of replicas to be deployed
replicaCount: 1
## Allowing use of ingress controllers
## ref: https://kubernetes.io/docs/concepts/services-networking/ingress/
##
ingress:
enabled: false
# metadata:
# annotations:
# kubernetes.io/ingress.class: nginx
# nginx.ingress.kubernetes.io/rewrite-target: /
# nginx.ingress.kubernetes.io/proxy-body-size: 4G
# kubernetes.io/tls-acme: "true"
# certmanager.k8s.io/cluster-issuer: letsencrypt-prod
# nginx.ingress.kubernetes.io/server-snippet: |-
# server_tokens off;
# proxy_hide_header X-Powered-By;
# rewrite ^/.well-known/webfinger /public.php?service=webfinger last;
# rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
# rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json;
# location = /.well-known/carddav {
# return 301 $scheme://$host/remote.php/dav;
# }
# location = /.well-known/caldav {
# return 301 $scheme://$host/remote.php/dav;
# }
# location = /robots.txt {
# allow all;
# log_not_found off;
# access_log off;
# }
# location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
# deny all;
# }
# location ~ ^/(?:autotest|occ|issue|indie|db_|console) {
# deny all;
# }
# tls:
# - secretName: wa-stack-nextcloud-tls
# hosts:
# - cloud.my-domain.io
# labels: {}
# Allow configuration of lifecycle hooks
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
lifecycle: {}
# postStartCommand: []
# preStopCommand: []
nextcloud:
host: cloud.my-domain.io
# username: admin
# password: changeme
# Use an existing secret
existingSecret:
enabled: true
secretName: nextcloud-secret
# for initiator
usernameKey: username
passwordKey: password
# secretName: nameofsecret
# usernameKey: username
# passwordKey: password
# smtpUsernameKey: smtp_username
# smtpPasswordKey: smtp_password
update: 0
datadir: /var/www/html/data
tableprefix: wa
persistence:
subPath:
mail:
enabled: false
fromAddress: user
domain: domain.com
smtp:
host: domain.com
secure: ssl
port: 465
authtype: LOGIN
name: user
password: pass
# PHP Configuration files
# Will be injected in /usr/local/etc/php/conf.d for apache image and in /usr/local/etc/php-fpm.d when nginx.enabled: true
phpConfigs: {}
# Default config files
# IMPORTANT: Will be used only if you put extra configs, otherwise default will come from nextcloud itself
# Default confgurations can be found here: https://github.com/nextcloud/docker/tree/master/16.0/apache/config
defaultConfigs:
# To protect /var/www/html/config
.htaccess: true
# Redis default configuration
redis.config.php: true
# Apache configuration for rewrite urls
apache-pretty-urls.config.php: true
# Define APCu as local cache
apcu.config.php: true
# Apps directory configs
apps.config.php: true
# Used for auto configure database
autoconfig.php: true
# SMTP default configuration
smtp.config.php: true
# Extra config files created in /var/www/html/config/
# ref: https://docs.nextcloud.com/server/15/admin_manual/configuration_server/config_sample_php_parameters.html#multiple-config-php-file
configs: {}
# For example, to use S3 as primary storage
# ref: https://docs.nextcloud.com/server/13/admin_manual/configuration_files/primary_storage.html#simple-storage-service-s3
#
# configs:
# s3.config.php: |-
# <?php
# $CONFIG = array (
# 'objectstore' => array(
# 'class' => '\\OC\\Files\\ObjectStore\\S3',
# 'arguments' => array(
# 'bucket' => 'my-bucket',
# 'autocreate' => true,
# 'key' => 'xxx',
# 'secret' => 'xxx',
# 'region' => 'us-east-1',
# 'use_ssl' => true
# )
# )
# );
## Strategy used to replace old pods
## IMPORTANT: use with care, it is suggested to leave as that for upgrade purposes
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
strategy:
type: Recreate
# type: RollingUpdate
# rollingUpdate:
# maxSurge: 1
# maxUnavailable: 0
##
## Extra environment variables
extraEnv:
# - name: SOME_SECRET_ENV
# valueFrom:
# secretKeyRef:
# name: nextcloud
# key: secret_key
# Extra mounts for the pods. Example shown is for connecting a legacy NFS volume
# to NextCloud pods in Kubernetes. This can then be configured in External Storage
extraVolumes:
# - name: nfs
# nfs:
# server: "10.0.0.1"
# path: "/nextcloud_data"
# readOnly: false
extraVolumeMounts:
# - name: nfs
# mountPath: "/legacy_data"
nginx:
## You need to set an fpm version of the image for nextcloud if you want to use nginx!
enabled: false
image:
repository: nginx
tag: alpine
pullPolicy: IfNotPresent
config:
# This generates the default nginx config as per the nextcloud documentation
default: true
# custom: |-
# worker_processes 1;..
resources: {}
internalDatabase:
enabled: false
name: nextcloud
##
## External database configuration
##
externalDatabase:
enabled: true
## Supported database engines: mysql or postgresql
type: mysql
## Database host
host: maria-db-mariadb-primary
## Database user
# user: wa-cloud
# Database password
# password:
## Database name
database: wa-cloud
## Use a existing secret
existingSecret:
enabled: true
secretName: mariadb-secret
usernameKey: db-username
passwordKey: mariadb-password
##
## MariaDB chart configuration
##
mariadb:
## Whether to deploy a mariadb server to satisfy the applications database requirements. To use an external database set this to false and configure the externalDatabase parameters
enabled: false
db:
name: nextcloud
user: nextcloud
password: changeme
replication:
enabled: false
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
master:
persistence:
enabled: false
# storageClass: ""
accessMode: ReadWriteOnce
size: 8Gi
postgresql:
enabled: false
global:
postgresql:
postgresqlUsername: nextcloud
postgresqlPassword: changeme
postgresqlDatabase: nextcloud
persistence:
enabled: false
# storageClass: ""
redis:
enabled: true
usePassword: false
password: ''
## Cronjob to execute Nextcloud background tasks
## ref: https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/background_jobs_configuration.html#webcron
##
cronjob:
enabled: true
# Nexcloud image is used as default but only curl is needed
image: {}
# repository: nextcloud
# tag: 16.0.3-apache
# pullPolicy: IfNotPresent
# pullSecrets:
# - myRegistrKeySecretName
# Every 5 minutes
# Note: Setting this to any any other value than 5 minutes might
# cause issues with how nextcloud background jobs are executed
schedule: "*/5 * * * *"
annotations: {}
# Set curl's insecure option if you use e.g. self-signed certificates
curlInsecure: false
failedJobsHistoryLimit: 2
successfulJobsHistoryLimit: 2
# If not set, nextcloud deployment one will be set
# resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
# If not set, nextcloud deployment one will be set
# nodeSelector: {}
# If not set, nextcloud deployment one will be set
# tolerations: []
# If not set, nextcloud deployment one will be set
# affinity: {}
service:
type: ClusterIP
port: 8080
loadBalancerIP: nil
nodePort: nil
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
# Nextcloud Data (/var/www/html)
enabled: true
annotations: {}
## nextcloud data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
storageClass: "linode-block-storage"
## A manually managed Persistent Volume and Claim
## Requires persistence.enabled: true
## If defined, PVC must be created manually before volume will be bound
# existingClaim:
accessMode: ReadWriteOnce
size: 20Gi
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
## Liveness and readiness probe values
## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
##
livenessProbe:
enabled: true
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
successThreshold: 1
startupProbe:
enabled: false
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 30
successThreshold: 1
## Enable pod autoscaling using HorizontalPodAutoscaler
## ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
##
hpa:
enabled: false
cputhreshold: 60
minPods: 1
maxPods: 10
nodeSelector: {}
tolerations: []
affinity: {}
## Prometheus Exporter / Metrics
##
metrics:
enabled: false
replicaCount: 1
# The metrics exporter needs to know how you serve Nextcloud either http or https
https: true
timeout: 5s
image:
repository: xperimental/nextcloud-exporter
tag: v0.3.0
pullPolicy: IfNotPresent
## Metrics exporter resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
# resources: {}
## Metrics exporter pod Annotation and Labels
# podAnnotations: {}
# podLabels: {}
service:
type: ClusterIP
## Use serviceLoadBalancerIP to request a specific static IP,
## otherwise leave blank
# loadBalancerIP:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9205"
labels: {}
And that's my ingress rule:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/proxy-body-size: 4G
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/server-snippet: |-
      server_tokens off;
      proxy_hide_header X-Powered-By;
      rewrite ^/.well-known/webfinger /public.php?service=webfinger last;
      rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
      rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json;
      location = /.well-known/carddav {
        return 301 $scheme://$host/remote.php/dav;
      }
      location = /.well-known/caldav {
        return 301 $scheme://$host/remote.php/dav;
      }
      location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
      }
      location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
        deny all;
      }
      location ~ ^/(?:autotest|occ|issue|indie|db_|console) {
        deny all;
      }
  name: wa-stack-cloud-ingress-nginx
  namespace: business
spec:
  tls:
    - hosts:
        - cloud.my-domain.io
      secretName: wa-cloud-tls
  rules:
    - host: cloud.my-domain.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nextcloud
                port:
                  number: 8080

How can I do a helm upgrade with a specific container tag version?

I am trying, through an Azure DevOps pipeline, to deploy a specific version of the container (not latest). How can I do that?
Previously, for this requirement, I used:
helm upgrade --values=$(System.DefaultWorkingDirectory)/<FOLDER/NAME>.yaml --namespace <NAMESPACE> --install --reset-values --wait <NAME> .
At the moment, it gives me errors with the flag "--app-version":
2020-06-25T15:43:51.9947356Z Error: unknown flag: --app-version
2020-06-25T15:43:51.9990453Z
2020-06-25T15:43:52.0054964Z ##[error]Bash exited with code '1'.
Maybe another way is to download from the Harbor repository and do a helm rollback to a version with that tag, but I can't find the way; I don't see it clearly.
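One approach that may work, assuming the chart's Deployment template actually reads .Values.image.tag (the values file below only shows image.repository and pullPolicy), is to override the tag at upgrade time:
# Pin a specific image tag at upgrade time (the tag value is a placeholder).
helm upgrade --install <NAME> . \
  --values=$(System.DefaultWorkingDirectory)/<FOLDER/NAME>.yaml \
  --namespace <NAMESPACE> --reset-values --wait \
  --set image.tag=0.0.1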
YML:
# Default values for consent-sandbox.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
nameSpace: <NAME>-pre
image:
  repository: <REPO>
  pullPolicy: Always
## Uncomment and remove [] to download image private
imagePullSecrets: []
#  - name: <namePullSecret>
nameOverride: ""
fullnameOverride: ""
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name:
podSecurityContext: {}
  # fsGroup: 2000
securityContext: {}
  # capabilities:
  #   drop:
  #     - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000
containers:
  portName: http
  port: 8080
  protocol: TCP
env:
  APP_NAME: <NAME>
  JAVA_OPTS_EXT: -Djava.security.egd=file:/dev/./urandom -Dcom.sun.net.ssl.checkRevocation=false -Djavax.net.ssl.trustStore=/etc/truststore/jssecacerts -Djavax.net.ssl.trustStorePassword=changeit
  WILY_MOM_PORT: 5001
  TZ: Europe/Madrid
  spring_cloud_config_uri: https://<CONF>.local
  spring_application_name: <NAME>
  SPRING_CLOUD_CONFIG_PROFILE: pre
envSecrets: {}
livenessProbe: {}
  # path: /
  # port: 8080
readinessProbe: {}
  # path: /
  # port: 8080
service:
  type: ClusterIP
  portName: http
  port: 8080
  targetPort: 8080
  containerPort: 8080
secret:
  jks: <JKS>-jks
  jssecacerts: jssecacerts
ingress:
  enabled: false
route:
  enabled: true
  status: ""
  # Default values for openshift-route.
  # This is a YAML-formatted file.
  # Declare variables to be passed into your templates.
  annotations:
    # kubernetes.io/acme-tls: "true"
    # haproxy.router.openshift.io/timeout: 5000ms
    # haproxy.router.openshift.io/ip_whitelist: <IP>
  labels:
  host: "<HOST>.paas.cloudcenter.corp"
  path: ""
  wildcardPolicy: None
  nameOverride: ""
  fullnameOverride: ""
  tls:
    enabled: true
    termination: edge
    insecureEdgeTerminationPolicy: "None"
    key:
    certificate:
    caCertificate:
    destinationCACertificate:
  service:
    name: "<NAME"
    targetPort: http
    weight: 100
  alternateBackends: []
resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  limits:
    cpu: 150m
    memory: 1444Mi
  requests:
    cpu: 100m
    memory: 1024Mi
nodeSelector: {}
tolerations: []
affinity: {}
Probably, I need to add this in the YML:
containers:
  - name: my_container
    image: my_image:latest
    imagePullPolicy: "Always"
CHART:
apiVersion: v2
name: examplename
description: testing
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
version: 1.0.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application.
appVersion: latest
But... what can I do if I can't change the YML?
Finally, I used another way, with the oc client:
oc patch deploy push-engine --type='json' -p '[{ "op": "replace", "path": "/spec/template/spec/containers/0/image", "value": "registry.sdi.dev.weu.azure.paas.cloudcenter.corp/test-dev/test:0.0.1" }]'

Filebeat on Kubernetes modules are not working

I am using this guide to run filebeat on a Kubernetes cluster.
https://www.elastic.co/guide/en/beats/filebeat/master/running-on-kubernetes.html#_kubernetes_deploy_manifests
filebeat version: 6.6.0
I updated the config file with:
filebeat.yml: |-
  filebeat.config:
    inputs:
      # Mounted `filebeat-inputs` configmap:
      path: ${path.config}/inputs.d/*.yml
      # Reload inputs configs as they change:
      reload.enabled: false
    modules:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false
  # To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
  #filebeat.autodiscover:
  #  providers:
  #    - type: kubernetes
  #      hints.enabled: true
  filebeat.modules:
    - module: nginx
      access:
        enabled: true
        var.paths: ["/var/log/nginx/access.log*"]
    - module: apache2
      access:
        enabled: true
        var.paths: ["/var/log/apache2/access.log*"]
      error:
        enabled: true
        var.paths: ["/var/log/apache2/error.log*"]
But the logs from the PHP application (/var/log/apache2/error.log) are not being fetched by filebeat. I checked by exec-ing into the filebeat pod, and I see that the apache2 and nginx modules are not enabled.
How can I set this up correctly in the above YAML file?
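For reference, this is roughly the check I ran inside the pod (namespace and label assume the default manifest from the Elastic guide):
# List enabled/disabled modules inside the running filebeat pod (default manifest uses kube-system / k8s-app=filebeat).
kubectl -n kube-system exec -it \
  $(kubectl -n kube-system get pod -l k8s-app=filebeat -o name | head -n1) \
  -- filebeat modules list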
UPDATE
I updated the filebeat config file with the settings below:
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: true
      templates:
        - condition:
          config:
            - type: docker
              containers.ids:
                - "${data.kubernetes.container.id}"
              exclude_lines: ["^\\s+[\\-`('.|_]"]  # drop asciiart lines
        - condition:
            equals:
              kubernetes.labels.app: "my-apache-app"
          config:
            - module: apache2
              log:
                input:
                  type: docker
                  containers.ids:
                    - "${data.kubernetes.container.id}"
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-modules
  namespace: default
  labels:
    k8s-app: filebeat
data:
  apache2.yml: |-
    - module: apache2
      access:
        enabled: true
      error:
        enabled: true
  nginx.yml: |-
    - module: nginx
      access:
        enabled: true
Now I am logging apache errors to /dev/stderr so that I can see them through kubectl logs. The logs are being fetched and show up in the Kibana dashboard, but the apache module is still not visible.
I tried checking with ./filebeat modules list:
Enabled:
apache2
nginx
Disabled:
[screenshot: Kibana Dashboard]

How to use Helm to install Kibana with the Logtrail plugin enabled?

I'm new to Helm and Kubernetes and cannot figure out how to use helm install --name kibana --namespace logging stable/kibana with the Logtrail plugin enabled. I can see there's an option in the values.yaml file to enable plugins during installation but I cannot figure out how to set it.
I've tried this without success:
helm install --name kibana --namespace logging stable/kibana \
--set plugins.enabled=true,plugins.value=logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.5.4-0.1.30.zip
Update:
As Ryan suggested, it's best to provide such complex settings via a custom values file. But as it turned out, the settings mentioned above are not the only ones you have to provide to get the Logtrail plugin working in Kibana. Some configuration for Logtrail must be set before doing the helm install, and here's how to set it. In your custom values file, set the following:
extraConfigMapMounts:
  - name: logtrail
    configMap: logtrail
    mountPath: /usr/share/kibana/plugins/logtrail/logtrail.json
    subPath: logtrail.json
After that the full content of your custom values file should look similar to this:
image:
repository: "docker.elastic.co/kibana/kibana-oss"
tag: "6.5.4"
pullPolicy: "IfNotPresent"
commandline:
args: []
env: {}
# All Kibana configuration options are adjustable via env vars.
# To adjust a config option to an env var uppercase + replace `.` with `_`
# Ref: https://www.elastic.co/guide/en/kibana/current/settings.html
#
# ELASTICSEARCH_URL: http://elasticsearch-client:9200
# SERVER_PORT: 5601
# LOGGING_VERBOSE: "true"
# SERVER_DEFAULTROUTE: "/app/kibana"
files:
kibana.yml:
## Default Kibana configuration from kibana-docker.
server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch:9200
## Custom config properties below
## Ref: https://www.elastic.co/guide/en/kibana/current/settings.html
# server.port: 5601
# logging.verbose: "true"
# server.defaultRoute: "/app/kibana"
deployment:
annotations: {}
service:
type: ClusterIP
externalPort: 443
internalPort: 5601
# authProxyPort: 5602 To be used with authProxyEnabled and a proxy extraContainer
## External IP addresses of service
## Default: nil
##
# externalIPs:
# - 192.168.0.1
#
## LoadBalancer IP if service.type is LoadBalancer
## Default: nil
##
# loadBalancerIP: 10.2.2.2
annotations: {}
# Annotation example: setup ssl with aws cert when service.type is LoadBalancer
# service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:EXAMPLE_CERT
labels: {}
## Label example: show service URL in `kubectl cluster-info`
# kubernetes.io/cluster-service: "true"
## Limit load balancer source ips to list of CIDRs (where available)
# loadBalancerSourceRanges: []
ingress:
enabled: false
# hosts:
# - kibana.localhost.localdomain
# - localhost.localdomain/kibana
# annotations:
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
# tls:
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
serviceAccount:
# Specifies whether a service account should be created
create: false
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
# If set and create is false, the service account must be existing
name:
livenessProbe:
enabled: false
initialDelaySeconds: 30
timeoutSeconds: 10
readinessProbe:
enabled: false
initialDelaySeconds: 30
timeoutSeconds: 10
periodSeconds: 10
successThreshold: 5
# Enable an authproxy. Specify container in extraContainers
authProxyEnabled: false
extraContainers: |
# - name: proxy
# image: quay.io/gambol99/keycloak-proxy:latest
# args:
# - --resource=uri=/*
# - --discovery-url=https://discovery-url
# - --client-id=client
# - --client-secret=secret
# - --listen=0.0.0.0:5602
# - --upstream-url=http://127.0.0.1:5601
# ports:
# - name: web
# containerPort: 9090
resources: {}
# limits:
# cpu: 100m
# memory: 300Mi
# requests:
# cpu: 100m
# memory: 300Mi
priorityClassName: ""
# Affinity for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
# affinity: {}
# Tolerations for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
podAnnotations: {}
replicaCount: 1
revisionHistoryLimit: 3
# To export a dashboard from a running Kibana 6.3.x use:
# curl --user <username>:<password> -XGET https://kibana.yourdomain.com:5601/api/kibana/dashboards/export?dashboard=<some-dashboard-uuid> > my-dashboard.json
# A dashboard is defined by a name and a string with the json payload or the download url
dashboardImport:
timeout: 60
xpackauth:
enabled: false
username: myuser
password: mypass
dashboards: {}
# k8s: https://raw.githubusercontent.com/monotek/kibana-dashboards/master/k8s-fluentd-elasticsearch.json
# List of plugins to install using initContainer
# NOTE : We notice that lower resource constraints given to the chart + plugins are likely not going to work well.
plugins:
# set to true to enable plugins installation
enabled: false
# set to true to remove all kibana plugins before installation
reset: false
# Use <plugin_name,version,url> to add/upgrade plugin
values:
- logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.5.4-0.1.30.zip
# - elastalert-kibana-plugin,1.0.1,https://github.com/bitsensor/elastalert-kibana-plugin/releases/download/1.0.1/elastalert-kibana-plugin-1.0.1-6.4.2.zip
# - logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.4.2-0.1.30.zip
# - other_plugin
persistentVolumeClaim:
# set to true to use pvc
enabled: false
# set to true to use you own pvc
existingClaim: false
annotations: {}
accessModes:
- ReadWriteOnce
size: "5Gi"
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
# default security context
securityContext:
enabled: false
allowPrivilegeEscalation: false
runAsUser: 1000
fsGroup: 2000
extraConfigMapMounts:
- name: logtrail
configMap: logtrail
mountPath: /usr/share/kibana/plugins/logtrail/logtrail.json
subPath: logtrail.json
And the last thing you should do is add this ConfigMap resource to Kubernetes:
apiVersion: v1
kind: ConfigMap
metadata:
  name: logtrail
  namespace: logging
data:
  logtrail.json: |
    {
      "version": 1,
      "index_patterns": [
        {
          "es": {
            "default_index": "logstash-*"
          },
          "tail_interval_in_seconds": 10,
          "es_index_time_offset_in_seconds": 0,
          "display_timezone": "local",
          "display_timestamp_format": "MMM DD HH:mm:ss",
          "max_buckets": 500,
          "default_time_range_in_days": 0,
          "max_hosts": 100,
          "max_events_to_keep_in_viewer": 5000,
          "fields": {
            "mapping": {
              "timestamp": "#timestamp",
              "hostname": "kubernetes.host",
              "program": "kubernetes.pod_name",
              "message": "log"
            },
            "message_format": "{{{log}}}"
          },
          "color_mapping": {}
        }
      ]
    }
After that you're ready to helm install with the values file specified via the -f flag.
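Putting it together, the sequence looks roughly like this (the file names are placeholders):
# Create the logtrail ConfigMap first, then install Kibana with the custom values file.
kubectl apply -f logtrail-configmap.yaml
helm install --name kibana --namespace logging stable/kibana -f kibana-values.yaml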
Getting input with --set that matches what the example in the values file has is a bit tricky. Following the example, we want the values to be:
plugins:
  enabled: true
  values:
    - logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.4.2-0.1.30.zip
The plugins.values entry here is tricky because it is an array, which means you need to enclose it with {}. And the relevant entry contains commas, which have to be escaped with a backslash. To get it to match you can use:
helm install --name kibana --namespace logging stable/kibana --set plugins.enabled=true,plugins.values={"logtrail\,0.1.30\,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.5.4-0.1.30.zip"}
If you add --dry-run --debug, you can see what the computed values are for any command you run, including with --set, so this can help check the match. This kind of value is easier to set with a custom values file referenced with -f, as it avoids having to work out how the --set evaluates to values.
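For example, appending the flags to the command above renders the chart and prints the computed values without installing anything:
# Render and print computed values without installing (helm 2 syntax, matching the commands above).
helm install --name kibana --namespace logging stable/kibana \
  --set plugins.enabled=true,plugins.values={"logtrail\,0.1.30\,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.5.4-0.1.30.zip"} \
  --dry-run --debug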