I created a ServiceMonitor for json-exporter in a Prometheus environment, but I cannot see its metrics. Is there a way to check the metrics? - kubernetes

I am a beginner using Prometheus and Grafana to monitor values from a REST API.
Prometheus, json-exporter, and Grafana were all installed from Helm charts; Prometheus was installed with the default values.yaml and json-exporter with a custom values.yaml.
I confirmed that Prometheus registered the json-exporter ServiceMonitor as a target, but I couldn't see its metrics.
How can I check the metrics? Below are the environment, screenshots, and code.
environment:
kubernetes: v1.22.9
helm: v3.9.2
prometheus-json-exporter helm chart: v0.5.0
kube-prometheus-stack helm chart: 0.58.0
screenshots:
https://drive.google.com/drive/folders/1vfjbidNpE2_yXfxdX8oX5eWh4-wAx7Ql?usp=sharing
values.yaml (custom_jsonexporter_values.yaml):
# Default values for prometheus-json-exporter.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: quay.io/prometheuscommunity/json-exporter
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: []
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: []

podSecurityContext: {}
  # fsGroup: 2000

# podLabels:
#   Custom labels for the pod

securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: ClusterIP
  port: 7979
  targetPort: http
  name: http

serviceMonitor:
  ## If true, a ServiceMonitor CRD is created for a prometheus operator
  ## https://github.com/coreos/prometheus-operator
  ##
  enabled: true
  namespace: monitoring
  scheme: http

  # Default values that will be used for all ServiceMonitors created by `targets`
  defaults:
    additionalMetricsRelabels: {}
    interval: 60s
    labels:
      release: prometheus
    scrapeTimeout: 60s

  targets:
    - name: pi2
      url: http://xxx.xxx.xxx.xxx:xxxx
      labels: {}                      # Map of labels for ServiceMonitor. Overrides value set in `defaults`
      interval: 60s                   # Scraping interval. Overrides value set in `defaults`
      scrapeTimeout: 60s              # Scrape timeout. Overrides value set in `defaults`
      additionalMetricsRelabels: {}   # Map of metric labels and values to add

ingress:
  enabled: false
  className: ""
  annotations: []
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: []

tolerations: []

affinity: []

configuration:
  config: |
    ---
    modules:
      default:
        metrics:
          - name: used_storage_byte
            path: '{ .used }'
            help: used storage byte
            values:
              used: '{ .used }'
            labels: {}
          - name: free_storage_byte
            path: '{ .free }'
            help: free storage byte
            labels: {}
            values:
              free: '{ .free }'
          - name: total_storage_byte
            path: '{ .total }'
            help: total storage byte
            labels: {}
            values:
              total: '{ .total }'

prometheusRule:
  enabled: false
  additionalLabels: {}
  namespace: ""
  rules: []

additionalVolumes: []
  # - name: password-file
  #   secret:
  #     secretName: secret-name

additionalVolumeMounts: []
  # - name: password-file
  #   mountPath: "/tmp/mysecret.txt"
  #   subPath: mysecret.txt

First, check the Targets page in the Prometheus UI to see (a) whether your desired target is even defined and (b) whether the endpoint is reachable and being scraped.
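If you don't have the UI exposed yet, a quick way in is a port-forward (only a sketch; the namespace and service name are assumptions that depend on your installation):
# forward the Prometheus web port and open http://localhost:9090/targets in a browser
kubectl -n monitoring port-forward svc/prometheus-operated 9090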
However, you may need to troubleshoot a little if either of the above is not the case:
It is important to understand what is happening. You have deployed a Prometheus Operator to the cluster. If you have used the default values from the Helm chart, you also deployed a Prometheus custom resource (CR). This instance is what tells the Prometheus Operator how to ultimately configure the Prometheus running inside the pod. Certain things are static, like global metric relabeling, but most are dynamic, such as picking up new targets to actually scrape.
Inside the Prometheus CR you will find options to specify serviceMonitorSelector and serviceMonitorNamespaceSelector (the behaviour is the same for probes and podMonitors, so I'm only going over it once). Assuming you leave the default serviceMonitorNamespaceSelector: {}, the Prometheus Operator will look for ServiceMonitors in all namespaces on the cluster to which it has access via its serviceAccount. The serviceMonitorSelector field lets you specify a label-and-value combination that must be present on a ServiceMonitor for it to be picked up.
Once one or more ServiceMonitors are found that match the criteria in the selectors, the Prometheus Operator adjusts the configuration in the actual Prometheus instance (tl;dr version) so you end up with proper scrape targets.
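For illustration, the relevant part of the Prometheus CR spec looks roughly like this (a sketch; the release label value depends on how you installed the chart):
spec:
  serviceMonitorSelector:
    matchLabels:
      release: prometheus-operator
  serviceMonitorNamespaceSelector: {}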
Step 1 for troubleshooting: do your selectors match the labels and namespace of the ServiceMonitors? Actually check those. The default of the prometheus-operator Helm chart expects a label release: prometheus-operator, and in your config you don't seem to add that to your json-exporter's ServiceMonitor.
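A quick way to check both sides (only a sketch; namespaces and names are assumptions, adjust them to your release):
# what the Prometheus CR selects on
kubectl -n monitoring get prometheus -o yaml | grep -A 3 serviceMonitorSelector
# what labels your ServiceMonitor actually carries
kubectl -n monitoring get servicemonitors --show-labels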
Step 2: the same behaviour outlined for how ServiceMonitors are picked up happens in turn inside the ServiceMonitor itself, so make sure that your Service actually matches what is specced out in the ServiceMonitor.
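Again only a sketch for checking that side (the service name and placeholder are assumptions based on the chart defaults):
# compare the Service's labels/ports with the selector and endpoint in the ServiceMonitor
kubectl -n monitoring get svc --show-labels | grep json-exporter
kubectl -n monitoring get servicemonitors -o yaml | grep -E -B 2 -A 6 'selector|endpoints'
# and verify the exporter itself answers a probe for your target
kubectl -n monitoring port-forward svc/<json-exporter-service> 7979
curl 'http://localhost:7979/probe?module=default&target=http://xxx.xxx.xxx.xxx:xxxx'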
To dive deeper into the options you have and what the fields do, check the API documentation.

Related

Installing nextcloud (helm) on linode k8s (v1.19) cluster with nginx-ingress tls/ssl encryption (let's encrypt) doesn't work as expected

I tried to install Nextcloud on a Linode managed k8s cluster with Helm. The installation process was no problem; everything worked fine. My Nextcloud instance is TLS-encrypted, but when I try to log in, nothing happens. In the browser console I only get this message:
Refused to send form data to 'http://cloud.my-domain.io/' because it violates the following Content Security Policy directive: "form-action 'self'".
I figured out that this probably has to do with the fact that Nextcloud has a problem with the reverse proxy of my k8s nginx ingress. I tried to solve the problem by adding 'overwriteprotocol' => 'https' to my config.php. Then the error message disappears, but I am not forwarded to the account page; I still see only the login page.
I also tried using the nginx ingress of the Nextcloud Helm chart itself, but that doesn't work at all. When I try to access Nextcloud in the browser, I only get a warning that the instance is unsafe. No login page, no Nextcloud page at all, just a blank browser page.
In the end, I tried using Nextcloud without any TLS, and that works fine. But of course that's not what I want; I want a secure connection to Nextcloud.
I have no idea what else I can do. What am I doing wrong? I really hope that someone can help me solve the problem. Thanks for your help!
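For reference, a setting like 'overwriteprotocol' => 'https' can also be injected through the chart's nextcloud.configs hook (the same mechanism as the s3.config.php example further down). This is only a sketch; the file name is arbitrary and the commented trusted_proxies range is an assumption:
nextcloud:
  configs:
    proxy.config.php: |-
      <?php
      $CONFIG = array (
        'overwriteprotocol' => 'https',
        // 'trusted_proxies' => array('10.0.0.0/8'),  // assumption: the range your ingress pods use
      );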
That's my helm values.yaml:
## Official nextcloud image version
## ref: https://hub.docker.com/r/library/nextcloud/tags/
##
image:
repository: nextcloud
tag: 19.0.3-apache
pullPolicy: IfNotPresent
# pullSecrets:
# - myRegistrKeySecretName
nameOverride: ""
fullnameOverride: ""
# Number of replicas to be deployed
replicaCount: 1
## Allowing use of ingress controllers
## ref: https://kubernetes.io/docs/concepts/services-networking/ingress/
##
ingress:
enabled: false
# metadata:
# annotations:
# kubernetes.io/ingress.class: nginx
# nginx.ingress.kubernetes.io/rewrite-target: /
# nginx.ingress.kubernetes.io/proxy-body-size: 4G
# kubernetes.io/tls-acme: "true"
# certmanager.k8s.io/cluster-issuer: letsencrypt-prod
# nginx.ingress.kubernetes.io/server-snippet: |-
# server_tokens off;
# proxy_hide_header X-Powered-By;
# rewrite ^/.well-known/webfinger /public.php?service=webfinger last;
# rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
# rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json;
# location = /.well-known/carddav {
# return 301 $scheme://$host/remote.php/dav;
# }
# location = /.well-known/caldav {
# return 301 $scheme://$host/remote.php/dav;
# }
# location = /robots.txt {
# allow all;
# log_not_found off;
# access_log off;
# }
# location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
# deny all;
# }
# location ~ ^/(?:autotest|occ|issue|indie|db_|console) {
# deny all;
# }
# tls:
# - secretName: wa-stack-nextcloud-tls
# hosts:
# - cloud.my-domain.io
# labels: {}
# Allow configuration of lifecycle hooks
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
lifecycle: {}
# postStartCommand: []
# preStopCommand: []
nextcloud:
host: cloud.my-domain.io
# username: admin
# password: changeme
# Use an existing secret
existingSecret:
enabled: true
secretName: nextcloud-secret
# for initiator
usernameKey: username
passwordKey: password
# secretName: nameofsecret
# usernameKey: username
# passwordKey: password
# smtpUsernameKey: smtp_username
# smtpPasswordKey: smtp_password
update: 0
datadir: /var/www/html/data
tableprefix: wa
persistence:
subPath:
mail:
enabled: false
fromAddress: user
domain: domain.com
smtp:
host: domain.com
secure: ssl
port: 465
authtype: LOGIN
name: user
password: pass
# PHP Configuration files
# Will be injected in /usr/local/etc/php/conf.d for apache image and in /usr/local/etc/php-fpm.d when nginx.enabled: true
phpConfigs: {}
# Default config files
# IMPORTANT: Will be used only if you put extra configs, otherwise default will come from nextcloud itself
# Default confgurations can be found here: https://github.com/nextcloud/docker/tree/master/16.0/apache/config
defaultConfigs:
# To protect /var/www/html/config
.htaccess: true
# Redis default configuration
redis.config.php: true
# Apache configuration for rewrite urls
apache-pretty-urls.config.php: true
# Define APCu as local cache
apcu.config.php: true
# Apps directory configs
apps.config.php: true
# Used for auto configure database
autoconfig.php: true
# SMTP default configuration
smtp.config.php: true
# Extra config files created in /var/www/html/config/
# ref: https://docs.nextcloud.com/server/15/admin_manual/configuration_server/config_sample_php_parameters.html#multiple-config-php-file
configs: {}
# For example, to use S3 as primary storage
# ref: https://docs.nextcloud.com/server/13/admin_manual/configuration_files/primary_storage.html#simple-storage-service-s3
#
# configs:
# s3.config.php: |-
# <?php
# $CONFIG = array (
# 'objectstore' => array(
# 'class' => '\\OC\\Files\\ObjectStore\\S3',
# 'arguments' => array(
# 'bucket' => 'my-bucket',
# 'autocreate' => true,
# 'key' => 'xxx',
# 'secret' => 'xxx',
# 'region' => 'us-east-1',
# 'use_ssl' => true
# )
# )
# );
## Strategy used to replace old pods
## IMPORTANT: use with care, it is suggested to leave as that for upgrade purposes
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
strategy:
type: Recreate
# type: RollingUpdate
# rollingUpdate:
# maxSurge: 1
# maxUnavailable: 0
##
## Extra environment variables
extraEnv:
# - name: SOME_SECRET_ENV
# valueFrom:
# secretKeyRef:
# name: nextcloud
# key: secret_key
# Extra mounts for the pods. Example shown is for connecting a legacy NFS volume
# to NextCloud pods in Kubernetes. This can then be configured in External Storage
extraVolumes:
# - name: nfs
# nfs:
# server: "10.0.0.1"
# path: "/nextcloud_data"
# readOnly: false
extraVolumeMounts:
# - name: nfs
# mountPath: "/legacy_data"
nginx:
## You need to set an fpm version of the image for nextcloud if you want to use nginx!
enabled: false
image:
repository: nginx
tag: alpine
pullPolicy: IfNotPresent
config:
# This generates the default nginx config as per the nextcloud documentation
default: true
# custom: |-
# worker_processes 1;..
resources: {}
internalDatabase:
enabled: false
name: nextcloud
##
## External database configuration
##
externalDatabase:
enabled: true
## Supported database engines: mysql or postgresql
type: mysql
## Database host
host: maria-db-mariadb-primary
## Database user
# user: wa-cloud
# Database password
# password:
## Database name
database: wa-cloud
## Use a existing secret
existingSecret:
enabled: true
secretName: mariadb-secret
usernameKey: db-username
passwordKey: mariadb-password
##
## MariaDB chart configuration
##
mariadb:
## Whether to deploy a mariadb server to satisfy the applications database requirements. To use an external database set this to false and configure the externalDatabase parameters
enabled: false
db:
name: nextcloud
user: nextcloud
password: changeme
replication:
enabled: false
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
master:
persistence:
enabled: false
# storageClass: ""
accessMode: ReadWriteOnce
size: 8Gi
postgresql:
enabled: false
global:
postgresql:
postgresqlUsername: nextcloud
postgresqlPassword: changeme
postgresqlDatabase: nextcloud
persistence:
enabled: false
# storageClass: ""
redis:
enabled: true
usePassword: false
password: ''
## Cronjob to execute Nextcloud background tasks
## ref: https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/background_jobs_configuration.html#webcron
##
cronjob:
enabled: true
# Nexcloud image is used as default but only curl is needed
image: {}
# repository: nextcloud
# tag: 16.0.3-apache
# pullPolicy: IfNotPresent
# pullSecrets:
# - myRegistrKeySecretName
# Every 5 minutes
# Note: Setting this to any any other value than 5 minutes might
# cause issues with how nextcloud background jobs are executed
schedule: "*/5 * * * *"
annotations: {}
# Set curl's insecure option if you use e.g. self-signed certificates
curlInsecure: false
failedJobsHistoryLimit: 2
successfulJobsHistoryLimit: 2
# If not set, nextcloud deployment one will be set
# resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
# If not set, nextcloud deployment one will be set
# nodeSelector: {}
# If not set, nextcloud deployment one will be set
# tolerations: []
# If not set, nextcloud deployment one will be set
# affinity: {}
service:
type: ClusterIP
port: 8080
loadBalancerIP: nil
nodePort: nil
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
# Nextcloud Data (/var/www/html)
enabled: true
annotations: {}
## nextcloud data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
storageClass: "linode-block-storage"
## A manually managed Persistent Volume and Claim
## Requires persistence.enabled: true
## If defined, PVC must be created manually before volume will be bound
# existingClaim:
accessMode: ReadWriteOnce
size: 20Gi
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
## Liveness and readiness probe values
## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
##
livenessProbe:
enabled: true
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
successThreshold: 1
startupProbe:
enabled: false
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 30
successThreshold: 1
## Enable pod autoscaling using HorizontalPodAutoscaler
## ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
##
hpa:
enabled: false
cputhreshold: 60
minPods: 1
maxPods: 10
nodeSelector: {}
tolerations: []
affinity: {}
## Prometheus Exporter / Metrics
##
metrics:
enabled: false
replicaCount: 1
# The metrics exporter needs to know how you serve Nextcloud either http or https
https: true
timeout: 5s
image:
repository: xperimental/nextcloud-exporter
tag: v0.3.0
pullPolicy: IfNotPresent
## Metrics exporter resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
# resources: {}
## Metrics exporter pod Annotation and Labels
# podAnnotations: {}
# podLabels: {}
service:
type: ClusterIP
## Use serviceLoadBalancerIP to request a specific static IP,
## otherwise leave blank
# loadBalancerIP:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9205"
labels: {}
And that's my ingress rule:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/proxy-body-size: 4G
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/server-snippet: |-
      server_tokens off;
      proxy_hide_header X-Powered-By;
      rewrite ^/.well-known/webfinger /public.php?service=webfinger last;
      rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
      rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json;
      location = /.well-known/carddav {
        return 301 $scheme://$host/remote.php/dav;
      }
      location = /.well-known/caldav {
        return 301 $scheme://$host/remote.php/dav;
      }
      location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
      }
      location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
        deny all;
      }
      location ~ ^/(?:autotest|occ|issue|indie|db_|console) {
        deny all;
      }
  name: wa-stack-cloud-ingress-nginx
  namespace: business
spec:
  tls:
    - hosts:
        - cloud.my-domain.io
      secretName: wa-cloud-tls
  rules:
    - host: cloud.my-domain.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nextcloud
                port:
                  number: 8080

How can I run helm upgrade with a specific container image tag?

I am trying, through Azure DevOps, to launch a pipeline that specifies the tag of a specific version of the container (not latest). How can I do that?
Previously to this requirement, I used:
helm upgrade --values=$(System.DefaultWorkingDirectory)/<FOLDER/NAME>.yaml --namespace <NAMESPACE> --install --reset-values --wait <NAME> .
At the moment, it gives me errors with the flag "--app-version":
2020-06-25T15:43:51.9947356Z Error: unknown flag: --app-version
2020-06-25T15:43:51.9990453Z
2020-06-25T15:43:52.0054964Z ##[error]Bash exited with code '1'.
Maybe another way is to download from the Harbor repository and do a Helm rollback to a version with that tag, but I can't find the way; it isn't clear to me.
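One hedged option, assuming the chart's deployment template reads the tag from .Values.image.tag (that key is not visible in the values below, so treat this as a sketch), is to pass the tag at upgrade time:
helm upgrade --values=$(System.DefaultWorkingDirectory)/<FOLDER/NAME>.yaml --namespace <NAMESPACE> --install --reset-values --wait --set image.tag=<TAG> <NAME> .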
YML:
# Default values for consent-sandbox.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
nameSpace: <NAME>-pre
image:
repository: <REPO>
pullPolicy: Always
## Uncomment and remove [] to download image private
imagePullSecrets: []
# - name: <namePullSecret>
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: false
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name:
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
containers:
portName: http
port: 8080
protocol: TCP
env:
APP_NAME: <NAME>
JAVA_OPTS_EXT: -Djava.security.egd=file:/dev/./urandom -Dcom.sun.net.ssl.checkRevocation=false -Djavax.net.ssl.trustStore=/etc/truststore/jssecacerts -Djavax.net.ssl.trustStorePassword=changeit
WILY_MOM_PORT: 5001
TZ: Europe/Madrid
spring_cloud_config_uri: https://<CONF>.local
spring_application_name: <NAME>
SPRING_CLOUD_CONFIG_PROFILE: pre
envSecrets: {}
livenessProbe: {}
# path: /
# port: 8080
readinessProbe: {}
# path: /
# port: 8080
service:
type: ClusterIP
portName: http
port: 8080
targetPort: 8080
containerPort: 8080
secret:
jks: <JKS>-jks
jssecacerts: jssecacerts
ingress:
enabled: false
route:
enabled: true
status: ""
# Default values for openshift-route.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
annotations:
# kubernetes.io/acme-tls: "true"
# haproxy.router.openshift.io/timeout: 5000ms
# haproxy.router.openshift.io/ip_whitelist: <IP>
labels:
host: "<HOST>.paas.cloudcenter.corp"
path: ""
wildcardPolicy: None
nameOverride: ""
fullnameOverride: ""
tls:
enabled: true
termination: edge
insecureEdgeTerminationPolicy: "None"
key:
certificate:
caCertificate:
destinationCACertificate:
service:
name: "<NAME"
targetPort: http
weight: 100
alternateBackends: []
resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
limits:
cpu: 150m
memory: 1444Mi
requests:
cpu: 100m
memory: 1024Mi
nodeSelector: {}
tolerations: []
affinity: {}
Probably, I need to add in the YAML:
containers:
  - name: my_container
    image: my_image:latest
    imagePullPolicy: "Always"
CHART:
apiVersion: v2
name: examplename
description: testing
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
version: 1.0.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application.
appVersion: latest
But... what can I do if I can't change the YAML?
Finally, I used another way, with the oc client:
oc patch deploy push-engine --type='json' -p '[{ "op": "replace", "path": "/spec/template/spec/containers/0/image", "value": "registry.sdi.dev.weu.azure.paas.cloudcenter.corp/test-dev/test:0.0.1" }]'

Is it safe to install a SonarQube helm chart with an existing PostgreSQL database?

I have to install a SonarQube Helm chart with PostgreSQL persistence pointing to an external database. This database server is already being used, and the chart is configured as below (IP and password changed for security reasons). My idea is to create a sonarDB database and install the chart. Would it be safe, or would there be a risk?
# Default values for sonarqube.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
# This will use the default deployment strategy unless it is overriden
deploymentStrategy: {}
image:
repository: sonarqube
tag: 7.9.1-community
# If using a private repository, the name of the imagePullSecret to use
# pullSecret: my-repo-secret
# Set security context for sonarqube pod
securityContext:
fsGroup: 999
# Settings to configure elasticsearch host requirements
elasticsearch:
configureNode: true
bootstrapChecks: true
service:
type: ClusterIP
externalPort: 9000
internalPort: 9000
labels:
annotations: {}
# May be used in example for internal load balancing in GCP:
# cloud.google.com/load-balancer-type: Internal
# loadBalancerSourceRanges:
# - 0.0.0.0/0
# loadBalancerIP: 1.2.3.4
ingress:
enabled: false
# Used to create an Ingress record.
hosts:
- name: sonar.organization.com
# default paths for "/" and "/*" will be added
path: /
# If a different path is defined, that path and {path}/* will be added to the ingress resource
# path: /sonarqube
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
# This property allows for reports up to a certain size to be uploaded to SonarQube
# nginx.ingress.kubernetes.io/proxy-body-size: "8m"
# Additional labels for Ingress manifest file
# labels:
# traffic-type: external
# traffic-type: internal
tls: []
# Secrets must be manually created in the namespace.
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
# Affinity for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}
# Tolerations for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
# hostAliases allows the modification of the hosts file inside a container
hostAliases: []
# - ip: "192.168.1.10"
# hostnames:
# - "example.com"
# - "www.example.com"
readinessProbe:
initialDelaySeconds: 60
periodSeconds: 30
failureThreshold: 6
# If an ingress *path* other than the root (/) is defined, it should be reflected here
# A trailing "/" must be included
sonarWebContext: /
# sonarWebContext: /sonarqube/
livenessProbe:
initialDelaySeconds: 60
periodSeconds: 30
# If an ingress *path* other than the root (/) is defined, it should be reflected here
# A trailing "/" must be included
sonarWebContext: /
# sonarWebContext: /sonarqube/
# Set extra env variables. Like proxy settings.
extraEnv: {}
# If an ingress *path* is defined, it should be reflected here
# sonar.web.context: /sonarqube
# Set annotations for pods
annotations: {}
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
persistence:
enabled: false
## Set annotations on pvc
annotations: {}
## Specify an existing volume claim instead of creating a new one.
## When using this option all following options like storageClass, accessMode and size are ignored.
#existingClaim: gke-homolog-sonarqube
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
storageClass:
accessMode: ReadWriteOnce
size: 10Gi
# List of plugins to install.
# For example:
plugins:
install:
- "https://github.com/sleroy/sonar-slack-notifier-plugin/releases/download/2.5/cks-slack-notifier-2.5.jar"
- "https://repo1.maven.org/maven2/org/sonarsource/java/sonar-java-plugin/5.14.0.18788/sonar-java-plugin-5.14.0.18788.jar"
#plugins:
#install: []
# initContainerImage: alpine:3.10.3
# deleteDefaultPlugins: true
#resources: {}
# We allow the plugins init container to have a separate resources declaration because
# the initContainer does not take as much resources.
# A custom sonar.properties file can be provided via dictionary.
# For example:
# sonarProperties:
# sonar.forceAuthentication: true
# sonar.security.realm: LDAP
# ldap.url: ldaps://organization.com
# Additional sonar properties to load from a secret with a key "secret.properties" (must be a string)
# sonarSecretProperties:
# Kubernetes secret that contains the encryption key for the sonarqube instance.
# The secret must contain the key 'sonar-secret.txt'.
# The 'sonar.secretKeyPath' property will be set automatically.
# sonarSecretKey: "settings-encryption-secret"
customCerts:
## Enable to override the default cacerts with your own one
enabled: false
secretName: my-cacerts
## Configuration value to select database type
## Option to use "postgresql" or "mysql" database type, by default "postgresql" is chosen
## Set the "enable" field to true of the database type you select (if you want to use internal database) and false of the one you don't select
#database:
# type: "postgresql"
## Configuration values for postgresql dependency
## ref: https://github.com/kubernetes/charts/blob/master/stable/postgresql/README.md
postgresql:
  # Enable to deploy the PostgreSQL chart
  enabled: false
  # To use an external PostgreSQL instance, set enabled to false and uncomment
  # the line below:
  postgresServer: "11.31.76.3"
  # To use an external secret for the password for an external PostgreSQL
  # instance, set enabled to false and provide the name of the secret on the
  # line below:
  # postgresPasswordSecret: ""
  postgresUser: "application"
  postgresPassword: "pass123"
  postgresDatabase: "sonarDB"
  # Specify the TCP port that PostgreSQL should use
  service:
    port: 5432
## Configuration values for the mysql dependency
## ref: https://github.com/kubernetes/charts/blob/master/stable/mysql/README.md
##
mysql:
# Enable to deploy the mySQL chart
enabled: false
# To use an external mySQL instance, set enabled to false and uncomment
# the line below:
# mysqlServer: ""
# To use an external secret for the password for an external mySQL instance,
# set enabled to false and provide the name of the secret on the line below:
# mysqlPasswordSecret: ""
mysqlUser: "sonarUser"
mysqlPassword: "sonarPass"
mysqlDatabase: "sonarDB"
# mysqlParams:
# useSSL: "true"
# Specify the TCP port that mySQL should use
service:
port: 3306
#
# Additional labels to add to the pods:
# podLabels:
# key: value
podLabels: {}
# For compatibility with 8.0 replace by "/opt/sq"
sonarqubeFolder: /opt/sonarqube
If you match the SonarQube version your existing database expects, then I doubt you'd have a problem. The Helm chart out of the box brings in a Community Edition, so get the correct image tag to use from Docker Hub.
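As for the "create a sonarDB database first" part, a minimal sketch (the admin user is an assumption; host and database name are taken from the values above):
psql -h 11.31.76.3 -U postgres -c 'CREATE DATABASE "sonarDB";'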

How to deploy MongoDB on K8s with a persistent volume connected to Azure File Service

I need to set up MongoDB on my K8s cluster in Azure and have the data stored in the Azure File Service. I am trying to do so with Helm and the following files:
1. StorageClass (account is in the same resource group as my k8s cluster)
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: azurefilestorage
  namespace: mongodb
provisioner: kubernetes.io/azure-file
parameters:
  storageAccount: mongodb
2. values.yaml
## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
# imageRegistry: myRegistryName
# imagePullSecrets:
# - myRegistryKeySecretName
image:
## Bitnami MongoDB registry
##
registry: docker.io
## Bitnami MongoDB image name
##
repository: bitnami/mongodb
## Bitnami MongoDB image tag
## ref: https://hub.docker.com/r/bitnami/mongodb/tags/
##
tag: 4.0.10-debian-9-r13
## Specify a imagePullPolicy
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistryKeySecretName
## Set to true if you would like to see extra information on logs
## It turns NAMI debugging in minideb
## ref: https://github.com/bitnami/minideb-extras/#turn-on-nami-debugging
debug: false
## Enable authentication
## ref: https://docs.mongodb.com/manual/tutorial/enable-authentication/
#
usePassword: true
# existingSecret: name-of-existing-secret
## MongoDB admin password
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#setting-the-root-password-on-first-run
##
# mongodbRootPassword:
## MongoDB custom user and database
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#creating-a-user-and-database-on-first-run
##
# mongodbUsername: username
# mongodbPassword: password
# mongodbDatabase: database
## Whether enable/disable IPv6 on MongoDB
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#enabling/disabling-ipv6
##
mongodbEnableIPv6: true
## Whether enable/disable DirectoryPerDB on MongoDB
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#enabling/disabling-directoryperdb
##
mongodbDirectoryPerDB: false
## MongoDB System Log configuration
## ref: https://github.com/bitnami/bitnami-docker-mongodb#configuring-system-log-verbosity-level
##
mongodbSystemLogVerbosity: 0
mongodbDisableSystemLog: false
## MongoDB additional command line flags
##
## Can be used to specify command line flags, for example:
##
## mongodbExtraFlags:
## - "--wiredTigerCacheSizeGB=2"
mongodbExtraFlags: []
## Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
##
securityContext:
enabled: true
fsGroup: 1001
runAsUser: 1001
## Kubernetes Cluster Domain
clusterDomain: cluster.local
## Kubernetes service type
service:
annotations: {}
type: LoadBalancer
# clusterIP: None
port: 27017
## Specify the nodePort value for the LoadBalancer and NodePort service types.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
##
# nodePort: 30123
## Specify the externalIP value ClusterIP service type.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
# externalIPs: []
## Specify the loadBalancerIP value for LoadBalancer service types.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
##
# loadBalancerIP:
## Setting up replication
## ref: https://github.com/bitnami/bitnami-docker-mongodb#setting-up-a-replication
#
replicaSet:
## Whether to create a MongoDB replica set for high availability or not
enabled: true
useHostnames: true
## Name of the replica set
##
name: rs0
## Key used for replica set authentication
##
# key: key
## Number of replicas per each node type
##
replicas:
secondary: 1
arbiter: 1
## Pod Disruption Budget
## ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
pdb:
minAvailable:
primary: 1
secondary: 1
arbiter: 1
# Annotations to be added to MongoDB pods
podAnnotations: {}
# Additional pod labels to apply
podLabels: {}
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
limits:
cpu: 500m
memory: 512Mi
requests:
cpu: 100m
memory: 256Mi
## Pod priority
## https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
# priorityClassName: ""
## Node selector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
nodeSelector: {}
## Affinity
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}
## Tolerations
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
## updateStrategy for MongoDB Primary, Secondary and Arbitrer statefulsets
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
updateStrategy:
type: RollingUpdate
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
enabled: true
## A manually managed Persistent Volume and Claim
## Requires persistence.enabled: true
## If defined, PVC must be created manually before volume will be bound
##
# existingClaim:
## The path the volume will be mounted at, useful when using different
## MongoDB images.
##
mountPath: /bitnami/mongodb
## The subdirectory of the volume to mount to, useful in dev environments
## and one PV for multiple services.
##
subPath: ""
## mongodb data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: azurefilestorage
accessModes:
- ReadWriteOnce
size: 8Gi
annotations: {}
# Expose mongodb via ingress. This is possible if using nginx-ingress
# https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
ingress:
enabled: false
annotations: {}
labels: {}
paths:
- /
hosts: []
tls:
- secretName: tls-cert
hosts: []
## Configure the options for init containers to be run before the main app containers
## are started. All init containers are run sequentially and must exit without errors
## for the next one to be started.
## ref: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
# extraInitContainers: |
# - name: do-something
# image: busybox
# command: ['do', 'something']
## Configure extra options for liveness and readiness probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
livenessProbe:
enabled: true
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 6
successThreshold: 1
# Define custom config map with init scripts
initConfigMap: {}
# name: "init-config-map"
# Entries for the MongoDB config file
configmap:
# # Where and how to store data.
# storage:
# dbPath: /opt/bitnami/mongodb/data/db
# journal:
# enabled: true
# #engine:
# #wiredTiger:
# # where to write logging data.
# systemLog:
# destination: file
# logAppend: true
# path: /opt/bitnami/mongodb/logs/mongodb.log
# # network interfaces
# net:
# port: 27017
# bindIp: 0.0.0.0
# unixDomainSocket:
# enabled: true
# pathPrefix: /opt/bitnami/mongodb/tmp
# # replica set options
# #replication:
# # replSetName: replicaset
# # process management options
# processManagement:
# fork: false
# pidFilePath: /opt/bitnami/mongodb/tmp/mongodb.pid
# # set parameter options
# setParameter:
# enableLocalhostAuthBypass: true
# # security options
# security:
# authorization: enabled
# #keyFile: /opt/bitnami/mongodb/conf/keyfile
## Prometheus Exporter / Metrics
##
metrics:
enabled: true
image:
registry: docker.io
repository: forekshub/percona-mongodb-exporter
tag: latest
pullPolicy: Always
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistryKeySecretName
## String with extra arguments to the metrics exporter
## ref: https://github.com/dcu/mongodb_exporter/blob/master/mongodb_exporter.go
extraArgs: ""
## Metrics exporter resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
# resources: {}
## Metrics exporter liveness and readiness probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
livenessProbe:
enabled: true
initialDelaySeconds: 15
periodSeconds: 5
timeoutSeconds: 5
failureThreshold: 3
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 5
periodSeconds: 5
timeoutSeconds: 1
failureThreshold: 3
successThreshold: 1
## Metrics exporter pod Annotation
podAnnotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9216"
## Prometheus Service Monitor
## ref: https://github.com/coreos/prometheus-operator
## https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md
serviceMonitor:
## If the operator is installed in your cluster, set to true to create a Service Monitor Entry
enabled: true
## Used to pass Labels that are used by the Prometheus installed in your cluster to select Service Monitors to work with
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
additionalLabels: {}
## Specify Metric Relabellings to add to the scrape endpoint
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
# relabellings:
alerting:
## Define individual alerting rules as required
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#rulegroup
## https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/
rules: {}
## Used to pass Labels that are used by the Prometheus installed in your cluster to select Prometheus Rules to work with
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
additionalLabels: {}
If I uncomment line 211 with:
storageClass: azurefilestorage
and hit
helm upgrade mongodb-dev stable/mongodb -f dev_values.yaml
I am getting the following error:
Error: UPGRADE FAILED: StatefulSet.apps "mongodb-dev-primary" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden. && StatefulSet.apps "mongodb-dev-secondary" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden.
Any hint where the problem is and how to get MongoDB connected to the Azure File Service?
Well, this has nothing to do with Azure Files. The error tells you that most fields of a StatefulSet spec can't be updated, so you'd need to delete the StatefulSets and create them from scratch.
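A sketch of that, using the names from the error above (the namespace is an assumption, and check what should happen to the existing PVCs before you do this):
kubectl -n mongodb delete statefulset mongodb-dev-primary mongodb-dev-secondary
helm upgrade mongodb-dev stable/mongodb -f dev_values.yaml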

How to use Helm to install Kibana with the Logtrail plugin enabled?

I'm new to Helm and Kubernetes and cannot figure out how to use helm install --name kibana --namespace logging stable/kibana with the Logtrail plugin enabled. I can see there's an option in the values.yaml file to enable plugins during installation but I cannot figure out how to set it.
I've tried this without success:
helm install --name kibana --namespace logging stable/kibana \
--set plugins.enabled=true,plugins.value=logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.5.4-0.1.30.zip
Update:
As Ryan suggested, it's best to provide such complex settings via a custom values file. But as it turned out, the above-mentioned settings are not the only ones you have to provide to get the Logtrail plugin working in Kibana. Some configuration for Logtrail must be set before doing the helm install, and here's how to set it. In your custom values file, set the following:
extraConfigMapMounts:
  - name: logtrail
    configMap: logtrail
    mountPath: /usr/share/kibana/plugins/logtrail/logtrail.json
    subPath: logtrail.json
After that the full content of your custom values file should look similar to this:
image:
repository: "docker.elastic.co/kibana/kibana-oss"
tag: "6.5.4"
pullPolicy: "IfNotPresent"
commandline:
args: []
env: {}
# All Kibana configuration options are adjustable via env vars.
# To adjust a config option to an env var uppercase + replace `.` with `_`
# Ref: https://www.elastic.co/guide/en/kibana/current/settings.html
#
# ELASTICSEARCH_URL: http://elasticsearch-client:9200
# SERVER_PORT: 5601
# LOGGING_VERBOSE: "true"
# SERVER_DEFAULTROUTE: "/app/kibana"
files:
kibana.yml:
## Default Kibana configuration from kibana-docker.
server.name: kibana
server.host: "0"
elasticsearch.url: http://elasticsearch:9200
## Custom config properties below
## Ref: https://www.elastic.co/guide/en/kibana/current/settings.html
# server.port: 5601
# logging.verbose: "true"
# server.defaultRoute: "/app/kibana"
deployment:
annotations: {}
service:
type: ClusterIP
externalPort: 443
internalPort: 5601
# authProxyPort: 5602 To be used with authProxyEnabled and a proxy extraContainer
## External IP addresses of service
## Default: nil
##
# externalIPs:
# - 192.168.0.1
#
## LoadBalancer IP if service.type is LoadBalancer
## Default: nil
##
# loadBalancerIP: 10.2.2.2
annotations: {}
# Annotation example: setup ssl with aws cert when service.type is LoadBalancer
# service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:EXAMPLE_CERT
labels: {}
## Label example: show service URL in `kubectl cluster-info`
# kubernetes.io/cluster-service: "true"
## Limit load balancer source ips to list of CIDRs (where available)
# loadBalancerSourceRanges: []
ingress:
enabled: false
# hosts:
# - kibana.localhost.localdomain
# - localhost.localdomain/kibana
# annotations:
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
# tls:
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
serviceAccount:
# Specifies whether a service account should be created
create: false
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
# If set and create is false, the service account must be existing
name:
livenessProbe:
enabled: false
initialDelaySeconds: 30
timeoutSeconds: 10
readinessProbe:
enabled: false
initialDelaySeconds: 30
timeoutSeconds: 10
periodSeconds: 10
successThreshold: 5
# Enable an authproxy. Specify container in extraContainers
authProxyEnabled: false
extraContainers: |
# - name: proxy
# image: quay.io/gambol99/keycloak-proxy:latest
# args:
# - --resource=uri=/*
# - --discovery-url=https://discovery-url
# - --client-id=client
# - --client-secret=secret
# - --listen=0.0.0.0:5602
# - --upstream-url=http://127.0.0.1:5601
# ports:
# - name: web
# containerPort: 9090
resources: {}
# limits:
# cpu: 100m
# memory: 300Mi
# requests:
# cpu: 100m
# memory: 300Mi
priorityClassName: ""
# Affinity for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
# affinity: {}
# Tolerations for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
podAnnotations: {}
replicaCount: 1
revisionHistoryLimit: 3
# To export a dashboard from a running Kibana 6.3.x use:
# curl --user <username>:<password> -XGET https://kibana.yourdomain.com:5601/api/kibana/dashboards/export?dashboard=<some-dashboard-uuid> > my-dashboard.json
# A dashboard is defined by a name and a string with the json payload or the download url
dashboardImport:
timeout: 60
xpackauth:
enabled: false
username: myuser
password: mypass
dashboards: {}
# k8s: https://raw.githubusercontent.com/monotek/kibana-dashboards/master/k8s-fluentd-elasticsearch.json
# List of plugins to install using initContainer
# NOTE : We notice that lower resource constraints given to the chart + plugins are likely not going to work well.
plugins:
  # set to true to enable plugins installation
  enabled: false
  # set to true to remove all kibana plugins before installation
  reset: false
  # Use <plugin_name,version,url> to add/upgrade plugin
  values:
    - logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.5.4-0.1.30.zip
    # - elastalert-kibana-plugin,1.0.1,https://github.com/bitsensor/elastalert-kibana-plugin/releases/download/1.0.1/elastalert-kibana-plugin-1.0.1-6.4.2.zip
    # - logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.4.2-0.1.30.zip
    # - other_plugin
persistentVolumeClaim:
# set to true to use pvc
enabled: false
# set to true to use you own pvc
existingClaim: false
annotations: {}
accessModes:
- ReadWriteOnce
size: "5Gi"
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
# storageClass: "-"
# default security context
securityContext:
enabled: false
allowPrivilegeEscalation: false
runAsUser: 1000
fsGroup: 2000
extraConfigMapMounts:
  - name: logtrail
    configMap: logtrail
    mountPath: /usr/share/kibana/plugins/logtrail/logtrail.json
    subPath: logtrail.json
And the last thing you should do is add this ConfigMap resource to Kubernetes:
apiVersion: v1
kind: ConfigMap
metadata:
  name: logtrail
  namespace: logging
data:
  logtrail.json: |
    {
      "version": 1,
      "index_patterns": [
        {
          "es": {
            "default_index": "logstash-*"
          },
          "tail_interval_in_seconds": 10,
          "es_index_time_offset_in_seconds": 0,
          "display_timezone": "local",
          "display_timestamp_format": "MMM DD HH:mm:ss",
          "max_buckets": 500,
          "default_time_range_in_days": 0,
          "max_hosts": 100,
          "max_events_to_keep_in_viewer": 5000,
          "fields": {
            "mapping": {
              "timestamp": "@timestamp",
              "hostname": "kubernetes.host",
              "program": "kubernetes.pod_name",
              "message": "log"
            },
            "message_format": "{{{log}}}"
          },
          "color_mapping": {
          }
        }
      ]
    }
After that, you're ready to run helm install with the values file specified via the -f flag.
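For example (the values file name is an assumption):
helm install --name kibana --namespace logging stable/kibana -f custom-values.yaml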
Getting input with --set that matches what the example in the values file has is a bit tricky. Following the example, we want the values to be:
plugins:
  enabled: true
  values:
    - logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.4.2-0.1.30.zip
The plugins.values entry here is tricky because it is an array, which means you need to enclose it with {}. And the relevant entry contains commas, which have to be escaped with a backslash. To get it to match, you can use:
helm install --name kibana --namespace logging stable/kibana --set plugins.enabled=true,plugins.values={"logtrail\,0.1.30\,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.5.4-0.1.30.zip"}
If you add --dry-run --debug, you can see what the computed values are for any command you run, including with --set, so this can help check the match. This kind of value is easier to set with a custom values file referenced with -f, as it avoids having to work out how --set evaluates to values.